rc.local not working - James Thomas

So I'm trying to do two things with my Omega on boot: set the GPIO mux, and then run a Python script that is to run for the lifetime of the device. My rc.local script contains the following:

# the system init finished. By default this file does nothing.
# setup the gpio mux
sh /root/setup.sh >> /tmp/setup.log 2>&1
# run the main program
python /root/Flash.py >> /tmp/flash.log 2>&1 &
exit 0

The setup script is:

omega2-ctrl gpiomux set i2c gpio
omega2-ctrl gpiomux set i2s gpio

and the Python script is:

import onionGpio
import time
import subprocess
import os
import sys

gpio5 = onionGpio.OnionGpio(5)
gpio5.setInputDirection()
print 'Ready'
while 1 == 1:
    #print 'GPIO5 set to: %s' % (value)
    if int(gpio5.getValue()) == 0:
        print 'starting flash'
        os.system("gpioctl dirout-low 4")
        os.system("sh /usr/UserFolder/ShellScript.sh")
        os.system("gpioctl dirout-high 4")
        print 'finish flashing'
        # time.sleep(5)
    else:
        time.sleep(.05)

These two scripts are not running. I can verify this by reading the GPIO mux after boot and seeing that the settings are still the defaults, not my changes. Both scripts have been granted execute permissions and work flawlessly when run individually from the shell. What am I missing?

- Jon Gordon

Are there any possible permission issues with the log files? What happens if you remove the logging output and just run the scripts? What happens if you remove setup.sh and place the two gpiomux commands directly in rc.local?

- Fabian Nydegger

Hi, @James-Thomas said in rc.local not working:

> # run the main program
> python /root/Flash.py >> /tmp/flash.log 2>&1 &

Remove the second "&", you don't need it:

python /root/Flash.py >> /tmp/flash.log 2>&1

Anyway, you will have a problem logging to files in /tmp for the lifetime of the device. If you don't reboot your Omega with cron every X days, the device will just stop working after a while: no space left... You know there's a Python module for GPIO?

- James Thomas

@Fab.
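As an aside, the polling loop from the thread can be factored into a function that is testable off-device. This is only a sketch: the onionGpio read and the gpioctl/ShellScript sequence are replaced by injected callables (an assumption for illustration, not part of the original script), so the loop logic can be exercised without Omega hardware.

```python
import time

def poll_and_flash(read_pin, run_flash, polls=None, interval=0.05):
    """Poll a GPIO pin and trigger the flash routine when it reads low (0).

    read_pin and run_flash are injected callables standing in for
    onionGpio.OnionGpio(5).getValue() and the gpioctl/ShellScript.sh
    sequence. `polls` bounds the number of iterations (None = run
    forever, as on the real device). Returns how many flashes ran.
    """
    flashed = 0
    count = 0
    while polls is None or count < polls:
        count += 1
        if int(read_pin()) == 0:   # active-low pin, as in the thread
            run_flash()
            flashed += 1
        else:
            time.sleep(interval)
    return flashed

# Simulate four polls: pin reads 1, 1, 0, 1 -> the flash routine fires once
readings = iter([1, 1, 0, 1])
events = []
n = poll_and_flash(lambda: next(readings), lambda: events.append("flash"),
                   polls=4, interval=0)
```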
http://community.onion.io/topic/2952/rc-local-not-working/4
The delegate topic seems to be a confusing and tough one for most developers. In this article I will explain the basics of delegates and event handling in C# in a simple manner. A delegate declaration looks like this:

public delegate int DelegateMethod(int x, int y);

With a class we create its object, which is an instance; but with a delegate, when we create an instance, that instance is also referred to as a delegate (meaning whatever you do, you get a delegate). A delegate does not know or care about the class of the object that it references. Any object will do; all that matters is that the method's argument types and return type match the delegate's. This makes delegates perfectly suited for "anonymous" invocation. In simple words, delegates are object oriented and type-safe and very secure, as they ensure that the signature of the method being called is correct. Delegates also help in code optimization.

There are two types of delegates:

1) Singlecast delegates
2) Multicast delegates

A delegate is a class. Any delegate is inherited from the base delegate class of the .NET class library when it is declared, from either of the two classes System.Delegate or System.MulticastDelegate. A singlecast delegate points to a single method at a time; the delegate is assigned to a single method at a time. Singlecast delegates are derived from the System.Delegate class. When a delegate is wrapped with more than one method, it is known as a multicast delegate. In C#, delegates are multicast, which means that they can point to more than one function at a time. They are derived from the System.MulticastDelegate class.

There are three steps in defining and using delegates:

1. Declaration

To create a delegate, you use the delegate keyword:

[attributes] [modifiers] delegate ReturnType Name ([formal-parameters]);

The attributes factor can be a normal C# attribute. The modifier can be one or an appropriate combination of the following keywords: new, public, private, protected, or internal. The ReturnType can be any of the data types we have used so far.
It can also be the type void or the name of a class. The Name must be a valid C# name. Because a delegate is some type of a template for a method, you must use parentheses, required for every method. If the method will not take any argument, leave the parentheses empty. Example:

public delegate void DelegateExample();

This code defines a delegate DelegateExample() that has a void return type and accepts no parameters.

2. Instantiation

DelegateExample d1 = new DelegateExample(Display);

The code above shows how the delegate is instantiated.

3. Invocation

d1();

The code above invokes the delegate d1().

A complete singlecast example:

using System;

namespace ConsoleApplication5
{
    class Program
    {
        public delegate void delmethod();

        public class P
        {
            public static void display()
            {
                Console.WriteLine("Hello!");
            }

            public static void show()
            {
                Console.WriteLine("Hi!");
            }

            public void print()
            {
                Console.WriteLine("Print");
            }
        }

        static void Main(string[] args)
        {
            // here we have assigned the static method show() of class P to the delegate delmethod()
            delmethod del1 = P.show;

            // here we have assigned the static method display() of class P to the delegate delmethod() using the new operator
            // you can use both ways to assign the delegate
            delmethod del2 = new delmethod(P.display);

            P obj = new P();
            // here we first create an instance of class P and assign the method print()
            // to the delegate, i.e. delegate with class instance
            delmethod del3 = obj.print;

            del1();
            del2();
            del3();
            Console.ReadLine();
        }
    }
}

A multicast example:

using System;

namespace delegate_Example4
{
    public delegate void delmethod(int x, int y);

    public class TestMultipleDelegate
    {
        public void plus_Method1(int x, int y)
        {
            Console.Write("You are in plus_Method");
            Console.WriteLine(x + y);
        }

        public void subtract_Method2(int x, int y)
        {
            Console.Write("You are in subtract_Method");
            Console.WriteLine(x - y);
        }

        static void Main(string[] args)
        {
            TestMultipleDelegate obj = new TestMultipleDelegate();
            delmethod del = new delmethod(obj.plus_Method1);
            // Here we have multicast
            del += new delmethod(obj.subtract_Method2);
            // plus_Method1 and subtract_Method2 are called
            del(50, 10);
            Console.WriteLine();
            // Here again we have multicast
            del -= new delmethod(obj.plus_Method1);
            // Only subtract_Method2 is called
            del(20, 10);
        }
    }
}

Points to remember about Delegates:

Usage areas of delegates

An Anonymous Delegate

You can create a delegate, but there is no need to declare the method associated with it; you do not have to explicitly define a method prior to using the delegate. Such a method is referred to as anonymous. In other words, if a delegate itself contains its method definition, it is known as an anonymous method.

using System;

public delegate void Test();

public class Program
{
    static int Main()
    {
        Test Display = delegate()
        {
            Console.WriteLine("Anonymous Delegate method");
        };
        Display();
        return 0;
    }
}

Note: You can also handle an event in an anonymous method.

Events and delegates are linked together. An event is a reference to a delegate, i.e. when the event is raised, the delegate is called. In C# terms, events are a special form of delegate. Events are nothing but a change of state. Events play an important part in GUI programming. Events and delegates work hand-in-hand to provide a program's functionality. A C# event is a class member that is activated whenever the event it was designed for occurs. It starts with a class that declares an event.
Any class, including the same class that the event is declared in, may register one of its methods for the event. This occurs through a delegate, which specifies the signature of the method that is registered for the event. The event keyword is a delegate modifier. It must always be used in connection with a delegate. The delegate may be one of the pre-defined .NET delegates or one you declare yourself. Whichever is appropriate, you assign the delegate to the event, which effectively registers the method that will be called when the event fires:

obj.MyEvent += new MyDelegate(obj.Display);

An event has the value null if it has no registered listeners. Although events are mostly used in Windows controls programming, they can also be implemented in console, web and other applications.

A custom event example:

using System;

namespace delegate_custom
{
    public delegate void MyDelegate(int a);

    public class XX
    {
        public event MyDelegate MyEvent;

        public void RaiseEvent()
        {
            MyEvent(20);
            Console.WriteLine("Event Raised");
        }

        public void Display(int x)
        {
            Console.WriteLine("Display Method {0}", x);
        }

        static void Main(string[] args)
        {
            XX obj = new XX();
            obj.MyEvent += new MyDelegate(obj.Display);
            obj.RaiseEvent();
        }
    }
}

A multicast event example:

using System;

namespace delegate_custom_multicast
{
    public delegate void MyDelegate(int a, int b);

    public class XX
    {
        public event MyDelegate MyEvent;

        public void RaiseEvent(int a, int b)
        {
            MyEvent(a, b);
        }

        public void Add(int x, int y)
        {
            Console.WriteLine("Add Method {0}", x + y);
        }

        public void Subtract(int x, int y)
        {
            Console.WriteLine("Subtract Method {0}", x - y);
        }

        static void Main(string[] args)
        {
            XX obj = new XX();
            obj.MyEvent += new MyDelegate(obj.Add);
            obj.MyEvent += new MyDelegate(obj.Subtract);
            obj.RaiseEvent(20, 10);
        }
    }
}

Anonymous methods can also handle events, for example in Windows Forms:

using System;
using System.Windows.Forms;

namespace delegate_anonymous
{
    public partial class Form1 : Form
    {
        public delegate int MyDelegate(int a, int b);

        EventHandler d1 = delegate(object sender, EventArgs e)
        {
            MessageBox.Show("Anonymous Method");
        };

        MyDelegate d2 = delegate(int a, int b)
        {
            return (a + b);
        };

        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            d1(sender, e);
        }

        private void button2_Click(object sender, EventArgs e)
        {
            int i = d2(10, 20);
            MessageBox.Show(i.ToString());
        }
    }
}

I hope this article has helped you understand delegates and events. They require serious study and practical understanding. Your feedback and constructive contributions are welcome. Please feel free to contact me for feedback or comments you may have about this article.
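Stepping outside C# for a moment, the multicast idea above, one handle that fans a call out to every registered method, is easy to sketch in other languages. The following is a minimal Python analogy of a multicast delegate; it is a cross-language illustration of the concept, not the .NET API, and the class name MulticastDelegate here is just an illustrative choice.

```python
class MulticastDelegate:
    """A tiny stand-in for C#'s multicast delegate: callables are
    added with += and removed with -=, and invoking the delegate
    invokes every registered callable in order."""

    def __init__(self):
        self._targets = []

    def __iadd__(self, fn):          # mirrors C#'s del += method
        self._targets.append(fn)
        return self

    def __isub__(self, fn):          # mirrors C#'s del -= method
        self._targets.remove(fn)
        return self

    def __call__(self, *args):
        return [fn(*args) for fn in self._targets]

results = []
plus = lambda x, y: results.append(("plus", x + y))
minus = lambda x, y: results.append(("minus", x - y))

delegate = MulticastDelegate()
delegate += plus
delegate += minus
delegate(50, 10)   # both handlers run, like del(50, 10) in the C# listing
delegate -= plus
delegate(20, 10)   # only the subtract handler runs
```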
http://www.c-sharpcorner.com/UploadFile/puranindia/C-Sharp-net-delegates-and-events/
Built-in Django tags

Django offers several built-in tags that offer immediate access to elaborate operations on Django templates. Unlike Django filters, which operate on individual variables, tags are designed to produce results without a variable or to operate across template sections. I'll classify each of these built-in tags into functional sections so it's easier to identify them. The functional classes I'll use are: dates, forms, comparison operations, loops, Python & filter operations, spacing and special characters, template structures, development and testing, and urls.

Dates

{% now %}.- The {% now %} tag offers access to the current system time. The {% now %} tag accepts a second argument to format the system date. For example, if the system date is 01/01/2015, for the statement {% now "F jS o" %} the tag output is January 1st 2015. The string syntax for the {% now %} tag is based on the Django date characters described in table 3-3. It's also possible to use the as keyword to re-use the value through a variable (e.g. {% now "Y" as current_year %} and later in the template declare {{ current_year }}).

Tip The {% now %} tag can accept Django date variables: {% now "DATE_FORMAT" %}, {% now "DATETIME_FORMAT" %}, {% now "SHORT_DATE_FORMAT" %} or {% now "SHORT_DATETIME_FORMAT" %}. The date variables are themselves composed of date strings. For example, DATE_FORMAT defaults to "N j, Y" (e.g. Jan 1, 2015), DATETIME_FORMAT defaults to "N j, Y, P" (e.g. Jan 1, 2015, 12 a.m.), SHORT_DATE_FORMAT defaults to "m/d/Y" (e.g. 01/01/2015) and SHORT_DATETIME_FORMAT defaults to "m/d/Y P" (e.g. 01/01/2015 12 a.m.). Each date variable can be overridden with different date strings in a project's settings.py file.

Forms

{% csrf_token %}.- The {% csrf_token %} tag provides a string to prevent cross-site request forgery. The {% csrf_token %} tag is only intended to be used inside HTML <form> tags. The data output of the {% csrf_token %} tag allows Django to prevent request forgeries (e.g. forged HTTP POST requests) from form data submissions. More details about the {% csrf_token %} tag are provided in the Django forms chapter.

Comparison operations

{% if %} with {% elif %} {% else %}.- The {% if %} tag is typically used in conjunction with the {% elif %} and {% else %} tags to evaluate more than one condition. An {% if %} tag with an argument variable evaluates to true if the variable exists and is not empty, or if the variable holds a True boolean value; a variable that just exists but is empty evaluates to false. Listing 3-18 illustrates a series of {% if %} tag examples.

Listing 3-18. Django {% if %} tag examples

{% if drinks %} We have drinks! {% endif %}
{% if drink %} {{drink.name}} {% else %} No drink specified {% endif %}
{% if drink %} {{drink.name}} {% elif dessert %} {{dessert.name}} {% else %} No drink or dessert {% endif %}

{% if %} with and, or and not operators.- The {% if %} tag also supports the and, or and not operators to evaluate multiple conditions in a single statement (e.g. {% if drink and dessert %}, {% if drink or dessert %}, {% if not drink %}).

{% firstof %}.- The {% firstof %} tag is a shorthand tag to output the first variable in a set of variables that's not empty. The same functionality of the {% firstof %} tag is achieved by nesting {% if %} tags. Listing 3-19 illustrates a sample of the {% firstof %} tag, as well as an equivalent set of nested {% if %} tags.

Listing 3-19. Django {% firstof %} tag and equivalent {% if %}{% elif %}{% else %} tags

# Firstof example
{% firstof var1 var2 var3 %}

# Equivalent of firstof example
{% if var1 %}
{{var1|safe}}
{% elif var2 %}
{{var2|safe}}
{% elif var3 %}
{{var3|safe}}
{% endif %}

# Firstof example with a default value in case of no match (i.e. all variables are empty)
{% firstof var1 var2 var3 "All vars are empty" %}

# Assign the firstof result to another variable
{% firstof var1 var2 var3 as resultof %}
# resultof now contains the result of the firstof statement

{% if <value> in <value> %} and {% if <value> not in <value> %}.- The {% if %} tag also supports the in and not in operators to test whether a value is contained in another value (e.g. {% if "mocha" in drinks %} or {% if "mocha" not in drinks %}).

{% if <value> is <value> %} and {% if <value> is not <value> %}.- The {% if %} tag also supports the is and is not operators to make object-level comparisons. For example, {% if target_drink is None %} tests if the value target_drink is a None object, while {% if daily_special is not True %} tests if the value daily_special is not True.
{% if value|<filter> <condition> <value> %}.- The {% if %} tag also supports applying filters directly on a value and then performing an evaluation. For example, {% if target_drink_list|random == user_drink %}Congratulations your drink just got selected!{% endif %} uses the random filter directly in a condition. Comparison operators are often aggregated into single statements (e.g. if...<...or...>...and...==...) and follow a certain execution precedence. Django follows the same operator precedence as Python[5]. So for example, the statement {% if drink in specials or drink == drink_of_the_day %} gets evaluated as ((drink in specials) or (drink == drink_of_the_day)), where the internal parenthesis operations are run first, since in and == have higher precedence than or. In Python you can alter this precedence by using explicit parentheses in comparison statements. However, Django does not support the use of parentheses in {% if %} tags; you must either rely on operator precedence or use nested {% if %} statements to declare the same logic produced by explicit parentheses.

Loops

{% for %} and {% for %} with {% empty %}.- The {% for %} tag iterates over items in a dictionary, list, tuple or string variable. The {% for %} tag syntax is {% for <reference> in <variable> %}, where reference is assigned a new value from the variable on each iteration. Depending on the nature of a variable there can be one or more references (e.g. for a list one reference {% for item in list %}, for a dictionary two references {% for key,value in dict.items %}). In addition, it's also possible to invert the loop sequence with the reversed keyword (e.g. {% for item in list reversed %}). The {% for %} tag also supports the {% empty %} tag, which is processed in case there are no iterations in a loop (i.e. the main variable is empty). Listing 3-20 illustrates a {% for %} loop example and a {% for %} with {% empty %} loop example.
Listing 3-20 Django {% for %} tag and {% for %} with {% empty %}

<ul>
{% for drink in drinks %}
  <li>{{ drink.name }}</li>
{% empty %}
  <li>No drinks, sorry</li>
{% endfor %}
</ul>

<ul>
{% for storeid,store in stores %}
  <li><a href="/stores{{storeid}}/">{{store.name}}</a></li>
{% endfor %}
</ul>

The {% for %} tag also generates a series of variables to manage the iteration process, such as an iteration counter, a first iteration flag and a last iteration flag. These variables can be useful when you want to create behaviors (e.g. formatting, additional processing) on a given iteration. Table 3-4 illustrates the {% for %} tag variables.

Table 3-4 Django {% for %} tag variables

forloop.counter - The current iteration of the loop (1-indexed)
forloop.counter0 - The current iteration of the loop (0-indexed)
forloop.revcounter - The number of iterations remaining (ends at 1)
forloop.revcounter0 - The number of iterations remaining (ends at 0)
forloop.first - True if this is the first iteration
forloop.last - True if this is the last iteration
forloop.parentloop - For nested loops, the forloop variable of the enclosing loop

{% ifchanged %}.- The {% ifchanged %} tag is a special logical tag used inside {% for %} tags. Sometimes it's helpful to know if a loop reference has changed from one iteration to the other (e.g. to insert a new title). The argument for the {% ifchanged %} tag is the loop reference itself (e.g. {% ifchanged drink %}{{drink}} section{% endifchanged %}) or a part of the reference (e.g. {% ifchanged store.name %}Available in {{store.name}}{% endifchanged %}). The {% ifchanged %} tag also supports the use of the {% else %} tag (e.g. {% ifchanged drink %}{{drink.name}}{% else %}Same old {{drink.name}} as before{% endifchanged %}).

{% cycle %}.- The {% cycle %} tag is used inside {% for %} tags to iterate over a given set of strings or variables. One of the primary uses of the {% cycle %} tag is to define CSS classes so each iteration receives a different CSS class. For example, if you want to assign different CSS classes to a list so each line appears in a different color (e.g. white, grey, white, grey) you can use <li class="{% cycle 'white' 'grey' %}">; in this manner, on each loop iteration the class value alternates between white and grey. The {% cycle %} tag can iterate sequentially over any number of strings or variables (e.g. {% cycle var1 var2 'red' %}).
By default, a {% cycle %} tag progresses through its values on the basis of its enclosing loop (i.e. one by one). But under certain circumstances, you may need to use a {% cycle %} tag outside of a loop or explicitly declare how a {% cycle %} tag advances. You can achieve this behavior by naming the {% cycle %} tag with the as keyword, as illustrated in listing 3-21.

Listing 3-21 Django {% cycle %} with explicit control of progression

<li class="{% cycle 'disc' 'circle' 'square' as bullettype %}">...</li>
<li class="{{bullettype}}">...</li>
<li class="{{bullettype}}">...</li>
<li class="{% cycle bullettype %}">...</li>
<li class="{{bullettype}}">...</li>
<li class="{% cycle bullettype %}">...</li>

# Outputs
<li class="disc">...</li>
<li class="disc">...</li>
<li class="disc">...</li>
<li class="circle">...</li>
<li class="circle">...</li>
<li class="square">...</li>

As you can see in listing 3-21, the {% cycle %} tag statement initially produces the first value, and afterwards you can continue using the cycle reference name to output the same value. In order to advance to the next value in the cycle, you call the {% cycle %} tag once more with the cycle reference name. A minor side-effect of the {% cycle %} tag is that it outputs its initial value where it's declared, something that can be problematic if you plan to use the cycle as a placeholder or in nested loops. To circumvent this side-effect, you can use the silent keyword after the cycle reference name (e.g. {% cycle 'disc' 'circle' 'square' as bullettype silent %}).

{% resetcycle %}.- The {% resetcycle %} tag is used to re-initiate a {% cycle %} tag to its first element. A {% cycle %} tag always loops over its entire set of values before returning to its first one, something that can be problematic in the context of nested loops. For example, if you want to assign three color codes (e.g. {% cycle 'red' 'orange' 'yellow' %}) to nested groups, the first group can consist of two elements that use up the first two cycle values (e.g. 'red' 'orange'), which means the second group starts on the third color code (e.g. 'yellow'). In order for the second group to start with the first {% cycle %} element again, you can use the {% resetcycle %} tag after a nested loop iteration finishes so the {% cycle %} tag returns to its first element.

{% regroup %}.- The {% regroup %} tag is used to rearrange the contents of a dictionary variable into different groups. The {% regroup %} tag avoids the need to create complex conditions inside a {% for %} tag to achieve the desired display. The {% regroup %} tag arranges the contents of a dictionary beforehand, making the {% for %} tag logic simpler. Listing 3-22 illustrates a dictionary with the use of the {% regroup %} tag along with its output.

Listing 3-22 Django {% for %} tag and {% regroup %}

# Dictionary definition
stores = [
    {'name': 'Downtown', 'street': '385 Main Street', 'city': 'San Diego'},
    {'name': 'Uptown', 'street': '231 Highland Avenue', 'city': 'San Diego'},
    {'name': 'Midtown', 'street': '85 Balboa Street', 'city': 'San Diego'},
    {'name': 'Downtown', 'street': '639 Spring Street', 'city': 'Los Angeles'},
    {'name': 'Midtown', 'street': '1407 Broadway Street', 'city': 'Los Angeles'},
    {'name': 'Downtown', 'street': '50 1st Street', 'city': 'San Francisco'},
]

# Template definition with regroup and for tags
{% regroup stores by city as city_list %}
<ul>
{% for city in city_list %}
  <li>{{ city.grouper }}
    <ul>
    {% for item in city.list %}
      <li>{{ item.name }}: {{ item.street }}</li>
    {% endfor %}
    </ul>
  </li>
{% endfor %}
</ul>

# Output
San Diego
  Downtown: 385 Main Street
  Uptown: 231 Highland Avenue
  Midtown: 85 Balboa Street
Los Angeles
  Downtown: 639 Spring Street
  Midtown: 1407 Broadway Street
San Francisco
  Downtown: 50 1st Street

Tip The {% regroup %} tag can also use filters or properties to achieve grouping results.
For example, the stores list in listing 3-22 is conveniently pre-ordered by city, making grouping by city automatic. But if the stores list were not pre-ordered, you would need to sort the list by city first to avoid fragmented groups; for this you can use a dictsort filter directly (e.g. {% regroup stores|dictsort:'city' by city as city_list %}). Another possibility of the {% regroup %} tag is to use nested properties if the grouping object has them (e.g. if city had a state property, {% regroup stores by city.state as state_list %}).

Python & filter operations

{% filter %}.- The {% filter %} tag is used to apply Django filters to template sections. If you declare {% filter lower %}, the lower filter is applied to all variables between this tag and the {% endfilter %} tag -- note the filter lower converts all content to lowercase. It's also possible to apply multiple filters to the same section using the same pipe technique used to chain filters on variables (e.g. {% filter lower|center:"50" %}...variables to convert to lower case and center...{% endfilter %}).

{% with %}.- The {% with %} tag lets you define variables in the context of Django templates. It's useful when you need to create variables for values that aren't exposed by a Django view method or when a variable is tied to a heavyweight operation. It's also possible to define multiple variables in the same {% with %} tag (e.g. {% with drinkwithtax=drink.cost*1.07 drinkpromo=drink.cost*0.85 %}). Each variable defined in a {% with %} tag is made available to the template until the {% endwith %} tag is reached.

Django templates don't allow the inclusion of inline Python logic. In fact, the closest thing Django templates allow to inline Python logic is the {% with %} tag, which isn't very sophisticated. The only way to make custom Python logic work in Django templates is to embed the code inside a custom Django tag or filter. This way you can place a custom Django tag or filter on a template and the Python logic runs behind the scenes.
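Because templates can't run inline Python, grouping work like that of {% regroup %} is often done in view code instead. The following sketch shows equivalent grouping logic with Python's itertools.groupby; it is an analogy of what the tag produces, not Django's internal implementation, and the shortened stores list is just for illustration.

```python
from itertools import groupby
from operator import itemgetter

stores = [
    {'name': 'Downtown', 'street': '385 Main Street', 'city': 'San Diego'},
    {'name': 'Uptown', 'street': '231 Highland Avenue', 'city': 'San Diego'},
    {'name': 'Downtown', 'street': '639 Spring Street', 'city': 'Los Angeles'},
]

# groupby, like {% regroup %}, only merges *adjacent* items, so the
# list must be sorted by the grouping key first to avoid fragmented groups
stores.sort(key=itemgetter('city'))

# Build the same shape the template sees: grouper + list per group
city_list = [
    {'grouper': city, 'list': list(items)}
    for city, items in groupby(stores, key=itemgetter('city'))
]

for group in city_list:
    print(group['grouper'], [s['name'] for s in group['list']])
```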
The next section describes how to create custom Django filters.

Spacing and special characters

{% autoescape %}.- The {% autoescape %} tag is used to escape HTML characters from a template section. The {% autoescape %} tag accepts one of two arguments: on or off. With {% autoescape on %} all template content between this tag and the {% endautoescape %} tag is HTML escaped, and with {% autoescape off %} all template content between this tag and the {% endautoescape %} tag is not escaped.

Tip If you want to enable or disable auto-escaping globally (i.e. on all templates), it's easier to do so at the project level using the autoescape field in the OPTIONS variable of the TEMPLATES configuration, inside a project's settings.py file, as described in the first section of this chapter. If you want to enable or disable auto-escaping on individual variables, you can either use the safe filter to disable auto-escaping on a single Django template variable or the escape filter to escape a single Django template variable.

{% spaceless %}.- The {% spaceless %} tag removes whitespace between HTML tags, including tab characters and newlines. Therefore all HTML content contained within {% spaceless %} and {% endspaceless %} becomes more compact. Note the {% spaceless %} tag only removes space between HTML tags; it does not remove space between text and HTML tags (e.g. in <p> <span> my span </span> </p>, only the space between the <p> <span> and </span> </p> tags is removed; the space inside the <span> tags that pads the "my span" string remains).

{% templatetag %}.- The {% templatetag %} tag is used to output reserved Django template characters. So if by any chance you want to display any of the characters {%, %}, {{, }}, {, }, {# or #} verbatim on a template, you can. The {% templatetag %} tag is used in conjunction with one of eight arguments to represent Django template characters.
{% templatetag openblock %} outputs {%, {% templatetag closeblock %} outputs %}, {% templatetag openvariable %} outputs {{, {% templatetag closevariable %} outputs }}, {% templatetag openbrace %} outputs {, {% templatetag closebrace %} outputs }, {% templatetag opencomment %} outputs {# and {% templatetag closecomment %} outputs #}. A simpler approach is to wrap reserved Django characters with the {% verbatim %} tag.

{% verbatim %}.- The {% verbatim %} tag is used to isolate template content from being processed. Any content between the {% verbatim %} tag and the {% endverbatim %} tag is bypassed by Django. This means special characters like {{, variable statements like {{drink}}, or JavaScript logic that uses special Django characters are ignored and rendered verbatim. If you need to output individual special characters, use the {% templatetag %} tag.

{% widthratio %}.- The {% widthratio %} tag is used to calculate the ratio of a value to a maximum value. The {% widthratio %} tag is helpful for displaying content that is fixed in width but needs to be scaled based on the amount of available space, as is the case with images and charts. For example, given the statement <img src="logo.gif" style="width:{% widthratio available_width image_width 100 %}%"/>, if available_width is 75 and image_width is 150, the result is 0.50 multiplied by 100, which gives 50. The image's width ratio is calculated based on the available space and image size, so in this case the statement is rendered as: <img src="logo.gif" style="width:50%"/>.

{% lorem %}.- The {% lorem %} tag is used to display random Latin text, which is useful as filler on templates. The {% lorem %} tag supports up to three parameters: {% lorem [count] [method] [random] %}. [count] is a number or variable with the number of paragraphs or words to generate; if not provided, the default [count] is 1. [method] is either w for words, p for HTML paragraphs or b for plain-text paragraph blocks; if not provided, the default [method] is b.
And the word random (if given) outputs random Latin words, instead of the common pattern (e.g. Lorem ipsum dolor sit amet...).

Template structures

{% block %}.- The {% block %} tag is used to define page sections that can be overridden on different Django templates. See the previous section in this chapter on how to create reusable templates for examples of this tag.

{% comment "Optional explanation" %}.- The {% comment %} tag is used to define comment sections on Django templates. Any content placed between the {% comment %} and {% endcomment %} tags is bypassed by Django and doesn't appear in the final rendered web page. Note the string argument in the opening {% comment %} tag is optional, but helps clarify the purpose of the comment.

{# #}.- The {# #} syntax can be used for a single-line comment on Django templates. Any content placed between {# and #} on a single line is bypassed by Django and doesn't appear in the final rendered web page. Note that if a comment spans multiple lines you should use the {% comment %} tag.

{% extends %}.- The {% extends %} tag is used to reuse the layout of another Django template. See the previous section in this chapter on creating reusable templates for examples of this tag.

{% include %}.- The {% include %} tag is used to embed a Django template in another Django template. See the previous section in this chapter on creating reusable templates for examples of this tag.

{% load %}.- The {% load %} tag is used to load custom Django tags and filters. The {% load %} tag requires one or multiple arguments with the names of the custom Django tags or filters. The next section of this chapter describes how to create custom filters and how to use the {% load %} tag.

Tip If you find yourself using the {% load %} tag on many templates, you may find it easier to register Django tags and filters with the builtins option in TEMPLATES so they become accessible on all templates as if they were built-in.
See the first section in this chapter on template configuration for more details.

Development and testing

{% debug %}.- The {% debug %} tag outputs debugging information that includes template variables and imported modules. The {% debug %} tag is useful during development and testing because it outputs 'behind the scenes' information used by Django templates.

Urls

{% url %}.- The {% url %} tag is used to build urls from predefined values in a project's urls.py file. The {% url %} tag is useful because it avoids the need to hardcode urls on templates; instead it inserts urls based on names. The {% url %} tag accepts a url name as its first argument and url parameters as subsequent arguments. For example, if a url points to /drinks/index/ and is named drinks_main, you can use the {% url %} tag to reference this url (e.g. <a href="{% url drinks_main %}"> Go to drinks home page </a>); if a url points to /stores/1/ and is named stores_detail, you can use the {% url %} tag with an argument to reference this url (e.g. <a href="{% url stores_detail store.id %}"> Go to {{store.name}} page </a>). The {% url %} tag also supports the as keyword to define the result as a variable. This allows the result to be used multiple times or at a point other than where the {% url %} tag is declared (e.g. {% url drink_detail drink.name as drink_of_the_day %} and later in the template <a href="{{drink_of_the_day}}"> Drink of the day </a>). Chapter 2 describes this process of naming Django urls for easier management and reverse matches in greater detail.
https://www.webforefront.com/django/usebuiltindjangotags.html
This tutorial deals with audio extraction from video using GPU-accelerated libraries supported by FFmpeg in Ubuntu. The full code is available in this GitHub repository. For similar posts about video processing, please refer to Resizing a video is unbelievably fast by GPU acceleration and GPU-based video rotation FFmpeg.

Introduction

FFmpeg is one of the most famous multimedia frameworks and is widely used for processing videos. In order to encode video, a video encoder must be used; NVIDIA GPUs provide a hardware encoder called NVENC. In this tutorial, the main goal is to show how to extract audio from video with GPU-accelerated libraries in Linux. We do not use terminal commands directly to employ FFmpeg with NVENC support. Instead, the Python interface is used to run commands in the terminal. This can be done with the subprocess and os modules. The assumption of this tutorial is that FFmpeg is already installed with NVENC support. The installation guide can be found in the FFMPEG WITH NVIDIA ACCELERATION ON UBUNTU LINUX documentation provided by NVIDIA.

Audio Extraction from Video

From now on the assumption is that the .txt file (a list of video file paths, one per line) is ready and well-formatted. The Python script for processing videos is as below:

import subprocess
import os
import sys

# Path to the list of video files
textfile_path = 'absolute/path/to/videos.txt'

# Read the text file
with open(textfile_path) as f:
    content = f.readlines()

# you may also want to remove whitespace characters like `\n` at the end of each line
files_list = [x.strip() for x in content]

# Extract audio from video.
# The audio file is saved under the name defined by file_path_output.
for file_num, file_path_input in enumerate(files_list, start=1): # Get the file name withoutextension file_name = os.path.basename(file_path_input) if 'mouthcropped' not in file_name: raw_file_name = os.path.basename(file_name).split('.')[0] file_dir = os.path.dirname(file_path_input) file_path_output = file_dir + '/' + raw_file_name + '.wav' print('processing file: %s' % file_path_input) subprocess.call( ['ffmpeg', '-i', file_path_input, '-codec:a', 'pcm_s16le', '-ac', '1', file_path_output]) print('file %s saved' % file_path_output) Overall Code Description videos.txt .txt files_list subprocess.call , subprocess.call an empty space for i in **/*.mp4; do base= base.mp4 -codec:a pcm_s16le -ac 1base.mp4 -codec:a pcm_s16le -ac 1 {i%.mp4}; ffmpeg -i{i%.mp4}; ffmpeg -i base.wav; donebase.wav; done As a consideration, if we are working on any specific virtual environment it has to be activated at first. Summary This tutorial demonstrated how to extract audio from a video and specifically using FFmpeg and Nvidia GPU accelerated library called NVENC. The advantage of using the Python interface is to easily parse the .txtfile and looping through all files. Moreover, it enables the user with options which are more complex to be directly employed in the terminal environment.
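As an appendix to the script above: the same output-path logic can be written a bit more compactly with pathlib, and subprocess.run is the modern replacement for subprocess.call. This is a sketch under the same assumptions as the article's script (ffmpeg installed and on the PATH); the function names here are my own.

```python
import subprocess
from pathlib import Path

def wav_path(video_path):
    # Derive the .wav output path next to the input video,
    # e.g. clip.mp4 -> clip.wav in the same directory.
    return Path(video_path).with_suffix(".wav")

def extract_audio(video_path):
    out = wav_path(video_path)
    # pcm_s16le = 16-bit PCM audio, -ac 1 = mono, matching the article's command
    subprocess.run(
        ["ffmpeg", "-i", str(video_path),
         "-codec:a", "pcm_s16le", "-ac", "1", str(out)],
        check=True)
    return out

print(wav_path("/videos/clip.mp4").name)  # clip.wav
```

check=True makes the script fail loudly if ffmpeg returns a non-zero exit code, which the original subprocess.call version silently ignores.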
https://www.machinelearningmindset.com/extracting-audio-from-video-using-ffmpeg/
CC-MAIN-2020-10
refinedweb
451
51.44
Question: Sometimes it is useful to “clone” a row or column vector to a matrix. By cloning I mean converting a row vector such as

[1, 2, 3]

into a matrix

[[1, 2, 3],
 [1, 2, 3],
 [1, 2, 3]]

or a column vector such as

[[1],
 [2],
 [3]]

into

[[1, 1, 1],
 [2, 2, 2],
 [3, 3, 3]]

In MATLAB or octave this is done pretty easily:

x = [1, 2, 3]
a = ones(3, 1) * x
a =
  1  2  3
  1  2  3
  1  2  3
b = (x') * ones(1, 3)
b =
  1  1  1
  2  2  2
  3  3  3

I want to repeat this in numpy, but unsuccessfully:

In [14]: x = array([1, 2, 3])
In [14]: ones((3, 1)) * x
Out[14]:
array([[ 1., 2., 3.],
       [ 1., 2., 3.],
       [ 1., 2., 3.]])
# so far so good
In [16]: x.transpose() * ones((1, 3))
Out[16]: array([[ 1., 2., 3.]])
# DAMN
# I end up with
In [17]: (ones((3, 1)) * x).transpose()
Out[17]:
array([[ 1., 1., 1.],
       [ 2., 2., 2.],
       [ 3., 3., 3.]])

Why wasn’t the first method (In [16]) working? Is there a way to achieve this task in python in a more elegant way?

Answer #1: Here’s an elegant, Pythonic way to do it:

>>> array([[1,2,3],]*3)
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])
>>> array([[1,2,3],]*3).transpose()
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])

The problem with [16] seems to be that the transpose has no effect for an array. You’re probably wanting a matrix instead:

>>> x = array([1,2,3])
>>> x
array([1, 2, 3])
>>> x.transpose()
array([1, 2, 3])
>>> matrix([1,2,3])
matrix([[1, 2, 3]])
>>> matrix([1,2,3]).transpose()
matrix([[1],
        [2],
        [3]])

Answer #2: Use numpy.tile:

>>> tile(array([1,2,3]), (3, 1))
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])

or for repeating columns:

>>> tile(array([[1,2,3]]).transpose(), (1, 3))
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])

Answer #3: First note that with numpy’s broadcasting operations it’s usually not necessary to duplicate rows and columns. See this and this for descriptions.
But to do this, repeat and newaxis are probably the best way:

In [12]: x = array([1,2,3])
In [13]: repeat(x[:,newaxis], 3, 1)
Out[13]:
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])
In [14]: repeat(x[newaxis,:], 3, 0)
Out[14]:
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])

This example is for a row vector, but applying this to a column vector is hopefully obvious. repeat seems to spell this well, but you can also do it via multiplication as in your example:

In [15]: x = array([[1, 2, 3]])  # note the double brackets
In [16]: (ones((3,1))*x).transpose()
Out[16]:
array([[ 1., 1., 1.],
       [ 2., 2., 2.],
       [ 3., 3., 3.]])

Answer #4: Let:

n = 1000
x = np.arange(n)
reps = 10000

Zero-cost allocations

A view does not take any additional memory. Thus, these declarations are instantaneous:

# New axis
x[np.newaxis, ...]
# Broadcast to specific shape
np.broadcast_to(x, (reps, n))

Forced allocation

If you want to force the contents to reside in memory:

%timeit np.array(np.broadcast_to(x, (reps, n)))
10.2 ms ± 62.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.repeat(x[np.newaxis, :], reps, axis=0)
9.88 ms ± 52.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.tile(x, (reps, 1))
9.97 ms ± 77.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

All three methods are roughly the same speed.

Computation

a = np.arange(reps * n).reshape(reps, n)
x_tiled = np.tile(x, (reps, 1))
%timeit np.broadcast_to(x, (reps, n)) * a
17.1 ms ± 284 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit x[np.newaxis, :] * a
17.5 ms ± 300 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit x_tiled * a
17.6 ms ± 240 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

All three methods are roughly the same speed.

Conclusion

If you want to replicate before a computation, consider using one of the “zero-cost allocation” methods. You won’t suffer the performance penalty of “forced allocation”.
Answer #5: I think using broadcasting in numpy is the best, and faster. I did a comparison as follows:

import numpy as np
b = np.random.randn(1000)

In [105]: %timeit c = np.tile(b[:, newaxis], (1,100))
1000 loops, best of 3: 354 µs per loop
In [106]: %timeit c = np.repeat(b[:, newaxis], 100, axis=1)
1000 loops, best of 3: 347 µs per loop
In [107]: %timeit c = np.array([b,]*100).transpose()
100 loops, best of 3: 5.56 ms per loop

about 15 times faster using broadcast

Answer #6: One clean solution is to use NumPy’s outer-product function with a vector of ones:

np.outer(np.ones(n), x)

gives n repeating rows. Switch the argument order to get repeating columns. To get an equal number of rows and columns you might do

np.outer(np.ones_like(x), x)

Answer #7: You can use

np.tile(x,3).reshape((4,3))

tile will generate the reps of the vector and reshape will give it the shape you want.

Answer #8: If you have a pandas dataframe and want to preserve the dtypes, even the categoricals, this is a fast way to do it:

import numpy as np
import pandas as pd

df = pd.DataFrame({1: [1, 2, 3], 2: [4, 5, 6]})
number_repeats = 50
new_df = df.reindex(np.tile(df.index, number_repeats))
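Pulling the answers together, here is a small runnable summary (assuming NumPy is installed) showing that the materialized clones and the zero-copy broadcast views hold the same values:

```python
import numpy as np

x = np.array([1, 2, 3])

# Materialized clones (answers #2 and #3)
rows = np.tile(x, (3, 1))                      # repeated rows
cols = np.repeat(x[:, np.newaxis], 3, axis=1)  # repeated columns

# Zero-copy views with the same values (answers #3 and #4)
rows_view = np.broadcast_to(x, (3, 3))
cols_view = np.broadcast_to(x[:, np.newaxis], (3, 3))

assert (rows == rows_view).all()
assert (cols == cols_view).all()
print(rows.tolist())  # [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
print(cols.tolist())  # [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
```

Note that broadcast_to returns a read-only view; copy it with np.array(...) if you need to mutate the result, as Answer #4's "forced allocation" section does.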
https://discuss.dizzycoding.com/cloning-row-or-column-vectors/
CC-MAIN-2022-33
refinedweb
969
83.25
A commonly asked question via ADN support and on our public forums is how to read or write DWG files from a standalone executable without having to install AutoCAD on the same machine. This can be done by licensing the Autodesk RealDWG SDK. This SDK allows you to build DWG capability into your own application without having to install AutoCAD on the same machine and automate it from your executable. RealDWG is essentially the DatabaseServices part of the AutoCAD .NET API (or AcDb part of ObjectARX), along with supporting namespaces. RealDWG doesn’t include AutoCAD ‘editor’ APIs, and so you can’t easily use it for viewing and plotting DWG files (unless you do a lot of work implementing your own graphics/plotting engine). If your customer won’t buy AutoCAD for that, but they need viewing and plotting with the same fidelity that AutoCAD provides, then consider AutoCAD OEM. AutoCAD OEM is a customizable AutoCAD that you can ‘brand’ as your own application, and from which you can expose a subset of the full AutoCAD functionality, and also add your own additional functionality. AutoCAD LT and DWG TrueView are examples of Autodesk products built using AutoCAD OEM. Both RealDWG and AutoCAD OEM are licensed technologies. You can find out more from the Tech Soft 3D website. (Tech Soft 3D are our global distributor for RealDWG and AutoCAD OEM). Here’s a video on RealDWG programming basics, recorded by DevTech’s Adam Nagy.
http://adndevblog.typepad.com/autocad/2012/03/readingwriting-dwg-files-from-a-standalone-application.html
CC-MAIN-2016-07
refinedweb
243
53
Hey everyone, I'm quite new to Unity. I saw others posting the same issue, but I cannot relate their situation to mine: where does this code go wrong? The error is the following:

Assets/Scripts/Aiming.cs(46,28): error CS0135: `xmou' conflicts with a declaration in a child block

and is related to the very last line of the following code:

public class Aiming : MonoBehaviour {
    float xmou;
    float zmou;

    void Update () {
        // 1. compute the mouse position every frame; 2. rotate the player by x degrees relative to the vector (1,0,0)
        // 1. convert the mouse position from pixels to an angle relative to the vector (1,0,0); 2. for each quadrant add or subtract degrees
        //var pos = Camera.main.WorldToScreenPoint(transform.position);
        float hei = Screen.height;
        float wid = Screen.width;
        float xmouabs = Input.mousePosition.x;
        float zmouabs = Input.mousePosition.z;
        if (xmouabs > wid/2){
            var xmou = xmouabs - wid/2;
            if (zmouabs > hei/2){
                var zmou = zmouabs - hei/2;}
            else{
                var zmou = zmouabs;}
        }
        else if (xmouabs < wid/2){
            var xmou = xmouabs;
            if (zmouabs > hei/2){
                var zmou = zmouabs - hei/2;}
            else{
                var zmou = zmouabs;}
        }
        Debug.Log (xmou.ToString ());

Thanks to both!

Answer by Paulo-Henrique025 · Oct 30, 2015 at 09:16 PM

Take a closer look: you are declaring a variable named xmou at line 3 (float xmou;) and inside Update() at line 18 you declare another variable with the same name (var xmou). The first xmou has class scope and will exist everywhere inside your class, including inside any method scope; this is why you can't declare a new one. To get more information about this, search for C# Scope.

Answer by Treyzania · Oct 31, 2015 at 12:40 PM

You don't need to write var or anything in front of variables when you assign them, only when you declare them.

xmou = xmouabs - wid / 2;
zmou = zmouabs - hei/2;

And so on.
https://answers.unity.com/questions/1090457/a-conflicts-with-a-declaration-in-a-child-block.html?sort=oldest
CC-MAIN-2020-29
refinedweb
362
73.58
Hey guys. I'm a new beginner in C++. I have some problems with my new code I have just written. I tried to draw a triangle with the character "*", but it isn't working like I want. Here it is:

#include <iostream>
using namespace
/* Draw a triangles */
int main()
{
    int a, b, c, d, e, f;
    Cout<<"Enter a";
    cin>>a;
    cout<<"Enter b";
    cin>>b;
    c=(b/2)+1;
    {for (d=0, d<b,d+1)
        if (d=c) cout<<"*";
        else cout<<" ";}
    endl;
    {for (e=0, e<(a-1), e++)
        {for (f=0,f<b,f++)
            if (f=(c-e)) cout<<"*";
            else cout <<" ";}
        endl}
    return 0;

The error in line 4 said "expected identifier before 'int'". And in lines 6 and 7 it said "'Cout'" (or "'cin'") "was not declared in this scope". Please help me with it.
https://www.daniweb.com/programming/software-development/threads/474120/trouble-with-function
CC-MAIN-2016-50
refinedweb
143
92.12
So you figured out Where Flux Went Wrong and are shipping your app with Redux. How will you measure usage? Will you know how users are using it once it’s launched? What about user authentication? You’ll definitely want to track that. But how will you do it? Will you add Google Analytics integration? Perhaps you’re planning to rock your own dashboards with Keen.io. But what about heat maps, page read statistics, e-commerce integration, or the ability to watch video recordings of your users? Which tools will you choose? How many integrations will you need? How well will those integrations work with Redux? If you’re asking these questions it’s a good thing. But the analytics tooling space is vast. And with so many options, which will you choose? I’m going to let you in on a little tip: don’t choose any of them, choose as many as you can and start experimenting. In this post you will learn how to enhance your reducers to connect your app to hundreds of analytics tooling integrations with a single dependency, at little to no cost and with very minimal effort.

Analytics is about learning

Have you ever heard of Segment? If not, let me give you the skinny. Segment is an analytics aggregation service providing hundreds of analytics tooling integrations with the toggle of a switch. It’s reliable and easy to work with. So easy, in fact, I added it to my blog. Their pricing model is moderate, even at scale. And, best of all, you can try out dozens of popular integrations such as Google Analytics for free, without any development experience needed.

Redux users integrating with Segment are in luck! Turns out the team over at Rangle have graciously open sourced a slick middleware library called segment-redux, making Redux integration with Segment so easy you could do it with a hand tied behind your back. Huge props to @bertrandk on his efforts maintaining it. Let’s install the middleware and start tracking, shall we?
Installing the Redux middleware

Unless you’re using a space-age abstraction like Redux Providers & Replicators, or aren’t even using React at all, you’re probably using something like React Router alongside react-router-redux to handle routing in your application. If so, you’ll be tickled to know that, once the middleware is installed, you’ll get application page view reporting out of the box.

Install the middleware package with npm i -S redux-segment and follow along with the two-step install instructions to connect it with Redux. The middleware installation instructions currently assume you’re building a fat client JS app and will be using Analytics.js. But you could just as easily swap in analytics-node if you’re building a Node app instead. Using redux-thunk for async data flow? No problem, the middleware supports that too.

Connecting with Segment

If you’ve already signed up for an account and created a Project for your app in Segment, all you need to do is plop the WRITE_KEY provided into your app while initializing, and let the Redux middleware do the rest. The key is safe to share, so you don’t need to worry about locking it up.

Sending an event

Once you’ve installed the redux-segment middleware and connected with Segment you can start tracking events immediately. As mentioned earlier, users of React Router Redux will enjoy out of the box support and will see Page events start flying the moment they navigate between routes. Additional events can be set up with just a few lines of code in your reducer.
Here’s an example Identify event we use to track user login events on some of our React apps here at TA:

import { EventTypes } from 'redux-segment'

export const loginSuccess = (token, user) => ({
  type: SESSION_LOGIN_SUCCESS,
  payload: {
    token,
    user,
  },
  meta: {
    analytics: {
      eventType: EventTypes.identify,
      eventPayload: {
        userId: user.id,
        traits: (() => {
          const { email, firstName, imageUrl, lastName, phoneNumber } = user
          return {
            avatar: imageUrl,
            email,
            firstName,
            lastName,
            phone: phoneNumber,
          }
        })(),
      },
    },
  },
})

Notice how we’ve taken a simple action creator and added a meta property called analytics, along with an event type and payload. That’s all there is to it! See the middleware Usage section for different event types, supported routers and additional documentation.

Track to your heart’s content

Now that you have an arsenal of analytics tooling integrations at your fingertips you can ask your stakeholders what they want to measure, instead of asking them how they want to go about measuring it. I’ve used Segment in the past to set up and scale several single-page apps tracked by Google Analytics, track application errors with Sentry and watch users navigate a site to discover UX issues. Currently I’m working on an AWS client-side monitoring solution leveraging Lambda and webhooks. And I get geeked every time someone shows me something new, so please don’t hesitate to gush about your favorite analytics tools in the comments »
http://www.shellsec.com/news/5579.html
CC-MAIN-2017-30
refinedweb
839
63.8
Smart server jail breaks bzr+http with shared repos

Bug Description

The hpss jail improvements in revision 4194 of bzr.dev broke bzr+http operations against branches in shared repos:

$ bzr branch http://
bzr: ERROR: Server sent an unexpected error: ('error', "jail break: 'chroot-

Standalone branches still work fine:

$ bzr branch http://
Branched 161 revision(s).

The server's log also gets spammed with tracebacks, e.g. "BzrError: jail break: 'chroot-

Related branches

- John A Meinel: Approve on 2009-10-21
- Diff: 236 lines, 6 files modified: NEWS (+3/-0), bzrlib/smart/branch.py (+2/-2), bzrlib/smart/protocol.py (+7/-4), bzrlib/smart/request.py (+11/-4), bzrlib/tests/test_wsgi.py (+55/-7), bzrlib/transport/http/wsgi.py (+2/-1)

I confirm that upgrading to bzr 1.14 makes it impossible to use an existing bzr+http server. So it's kind of a "big" bug. Workaround: downgrading to 1.13.

It seems like the chroot information should be based on the configuration, and not the first URL passed. I don't have specific information about how it is working, but that might be an avenue to look at for debugging this.

chroot information is dynamic based on the url and the root of the server; it may be the wsgi glue. If I was guessing, I would say the chroot is being set based on the '.bzr/smart' you are posting to. So if your layout is:

repo/branch

Then your initial POST goes to 'http://

At that point, we can see that we don't have a repository (or maybe we see the repository is 1 level up from a request, etc). Anyway, we've done things 2 different ways in the past. One is to post:

1) POST http://
2) POST http://

The main difference is whether the "Transport.

My guess is the current code is doing (1), and that the chroot is being based on the POST url, which means that the jail is causing us to fail. So at this point we could:

1) Change the client to go back to the old way, and have 'clone(..)' change the POST url. I think Andrew didn't like that form.
I *think* because there was no guarantee that both URLs were valid to POST to, and we know we have something valid where we are at now.

John A Meinel wrote: [...] > I think this option makes the most sense. It's consistent with how the TCP and inet (SSH) servers work.

I just added an evil little monkeypatch to my bzr-smart script that changes the jail to your configured root (e.g. /srv/bzr/repo):

from bzrlib.smart import request

def setup_jail(self):
    # The backing transport is a chroot on /srv/bzr/repo
    transport = self._backing_
    request.
    request.

Monkeypatching is wrong, and might break, of course, but at least it works right now.

I have a fix for this, finally. See the linked branch. Will add test(s) and submit for review; hopefully it will be part of 2.1.0. With this plus a small bug fix for loggerhead, loggerhead's serve-branches script Just Works for me for serving a shared repo over bzr+http.

For posterity, I'm attaching a better (mostly just longer) version of the monkeypatch. It verifies that the backing transport really is a chroot, so it should be a bit safer. Credit goes to spiv, BTW. (Not blame, though!)

@Matt, please do a merge proposal instead of adding a patch to this bug, it's far easier to track, thanks in advance!

Vincent Ladeuil wrote: > @Matt, please do a merge proposal instead of adding a patch to this bug, > it's far easier to track, thanks in advance!

This bug has long been fixed. The monkeypatch is just a workaround applications can use if they need to run with an older (< 2.1) bzr.

@Matt, ha ok, thanks for the explanation :D

Note: I just reverted the copy of bzr used by my bzr+http server to r4193, so http://bzr.mattnordhoff.com/ no longer exhibits this bug.
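For readers unfamiliar with the technique discussed in this thread, monkeypatching just means rebinding a method on a class at runtime. A generic, hypothetical illustration in Python — the class and method bodies below are invented for illustration; the real bzrlib code only appears above in truncated form:

```python
class SmartServerRequest:
    # Hypothetical stand-in for the real server request class.
    def setup_jail(self):
        return "jail rooted at the request URL"

def setup_jail_widened(self):
    # Replacement behaviour: root the jail at the configured server
    # root instead, as the workaround in this bug report does.
    return "jail rooted at the server root"

# The "patch": rebind the method on the class at import time.
# Every existing and future instance now uses the new behaviour.
SmartServerRequest.setup_jail = setup_jail_widened

print(SmartServerRequest().setup_jail())  # jail rooted at the server root
```

As the comments in the thread point out, this is fragile: it silently depends on internals staying stable across versions, which is why the proper fix landing upstream in 2.1 made the workaround obsolete.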
https://bugs.launchpad.net/bzr/+bug/348308
CC-MAIN-2021-10
refinedweb
678
75.1
Brave new container world – an overview

01/07/20 by Bertram Vogel

The container ecosystem changes virtualization and the way we work as software developers by providing us with new possibilities to build, distribute and run software. Docker and Kubernetes are surely the best-known technologies in this field. But a lot of other (new) tools and methods offer promising approaches to the subject as well. By now, it has become hard to keep an overview on what is out there and what serves which purpose.

ContainerConf 2019: Getting an overview on container technologies

Therefore, in mid-November, I attended the ContainerConf in Mannheim, Germany – a 4-day conference about containers, Docker, Kubernetes, and everything related to that. I learned about a lot of new and interesting concepts, tools and methodologies. In this blog post, I want to share some of my impressions with you and hopefully give you a helpful overview on the recent state of the art when it comes to container technologies.

The first thing I noticed is that the ContainerConf has a heavily interlinked sister conference: Continuous Lifecycle. It takes place at the same time and at the same location. As the name suggests, this conference is all about Build Management, DevOps, Continuous Integration, Continuous Delivery, and so forth. So the topics of both conferences definitely share a common ground. A ticket is valid for both conferences, so one more reason to attend!

Workshops: Kubernetes and Microservices

The first and the last day of the conference are workshop days. I attended one on each day and both were very helpful. “Introduction to Kubernetes” from Erkan Yanar was a pleasant hands-on workshop. Together with Erkan, we explored the basics of Kubernetes (Pods, Deployments, ConfigMaps, Ingress, etc.). We also touched some more advanced topics like Helm and Prometheus at the end.
The only downside of the workshop was that it ended when it got really interesting.

“Microservices – Architecture and Implementation with Kubernetes and Istio” was quite different because it was not a hands-on workshop. I was a bit skeptical about the format at first. I feared it would be a whole day of basically only listening to a talk. But Eberhard Wolff is a skillful teacher and the workshop was also interactive. Instead of fiddling with technical details on each participant’s laptop, we spent more time with discussions about the design of our example project. The workshop was absolutely worthwhile, and you could feel that Eberhard has a vast amount of experience.

Besides the workshops, a conference also consists of numerous exciting presentations. I attended 12 talks with a wide variety of topics. I will only give you a more detailed summary about some of them and a condensed recap about the other ones.

Keynote “Trajectory of Chaos Engineering”

What a talk! I really enjoyed it. Casey Rosenthal presented the basic idea (and need!) of Chaos Engineering and spiced it up with some insightful and funny anecdotes. “Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.” Especially in distributed microservice architectures, it is hard to test (or even understand) the system behavior as a whole. Chaos Engineering is one approach to tackle the complexity of such systems. One of the key points is that it is proactive.

Casey proposed that Chaos Engineering is no longer the new kid on the block. It is already accepted and used in many companies. In his opinion, we will see a new trend soon: After CI (Continuous Integration) and CD (Continuous Deployment) we will have CV (Continuous Verification).
In my understanding, you can think of Continuous Verification as the automated repetition of chaos experiments – neat idea!

“Service Mesh – What the new infrastructure can do for Microservices” (Orig: “Service Mesh – Was die neue Infrastruktur für Microservices taugt”)

Hanna Prinz gave a very informative talk about service meshes. She started by presenting one big problem with distributed microservice architectures: cross-cutting concerns like timeouts, routing, authorization, encryption, etc. You will need to implement/handle these in each of your microservices. This can be painful, especially if you use different technologies and programming languages in your microservices. The idea of a service mesh is to provide these features out-of-the-box for all your microservices. She then went on to explain in more detail how a service mesh can help you with observability, routing, resilience, and security.

For the demonstration part Hanna Prinz chose Istio, currently the most well-known service mesh. But in her opinion Linkerd is often good enough and much easier to install and handle than Istio. She would only recommend Istio if its flexibility and complexity is truly needed. Hanna and her colleagues built a website that gives a quick overview about service meshes. This website also includes a comparison of the different implementations. Definitely have a look: servicemesh.es

“Infrastructure-as-Code with Pulumi: Better than all the others (like Ansible)?”

In this talk my codecentric colleague Jonas Hecht presented Pulumi, a new tool for Infrastructure-as-Code (cloud provisioning, configuration management, application deployment, …). After a general introduction of Pulumi and some live demos, he also did a thorough comparison with Ansible. One of the main differences is that Pulumi is not declarative, but procedural. This, of course, allows a great deal of flexibility!
With the disadvantage that this might lead to a very diverse set of infrastructure files.

"Nice Talk from my @codecentric colleague @jonashackt about infrastructure-as-code with Pulumi. Seems like Pulumi has some nice ideas, but is not really the holy grail of IaC it claims to be. #ConLi19 #HypeDrivenDevelopment pic.twitter.com/3SjOowc12Q" — Bertram Vogel (@bertoverflow) November 13, 2019

My impression was that Jonas is not yet convinced by Pulumi in its current state. It claims to be language-agnostic, but almost all examples are JavaScript at the moment. It claims to be multi-cloud, but this seems to be mainly AWS currently. Pulumi resources in their current form are basically just “Terraform wrappers”. And the comparison against Ansible did also not yield a clear winner. But Jonas also admitted that a lot of these points stem from the fact that Pulumi is still very young. Thus, it also does not have a big community. Jonas definitely was impressed by how quickly he could get an AWS Fargate up and running with Pulumi Crosswalk for AWS!

“Creating a fast Kubernetes Development Workflow”

This talk from Bastian Hofman was not so much about a fancy new methodology or concept, but more about software craftsmanship: how to use Kubernetes not only for production but also for your development workflow (and in an efficient way). My main takeaways are:

- use Helm: Helm is the package manager for Kubernetes and makes it super easy to install applications in your Kubernetes cluster
- Helm charts provide a templating mechanism and thus can also be used to configure your Kubernetes application for production, deployment, and different branches without copy-pasting all your yaml files
- helpful Kubernetes tooling:
  - kubeval: validate Kubernetes configuration files
  - kubectx + kubens: easily switch between clusters and namespaces
  - kubectl-debug: would have loved to already know about this during the workshop. kubectl-debug will run a new container in a running pod so you can debug the other containers in that pod. This works because all containers in a pod share the same pid- and network-namespace.
  - helm lint: use this to validate your helm charts while editing them
  - K9s: this was actually not part of the talk; instead my colleague Dennis Effing showed me this wonderful piece of software. K9s is a CLI that greatly improves the interaction with Kubernetes clusters. Just install it and see for yourself!

Quick recap about the other talks

The following list summarizes what I personally found the most interesting in the remainder of the talks. It is not meant to be an exhaustive explanation, but just to give you some buzzwords or phrases to google if it sounds promising to you 😉

- “Infrastructure as Microservices – Alternatives to the monolith Kubernetes” (Orig: “Infrastructure as Microservices – Alternativen zum Monolithen Kubernetes”) (Nils Bokermann, Sandra Parsick): Do you really need all of Kubernetes? It basically is a whole datacenter and it is complex. Why not slowly only introduce the parts that you currently require? Central configuration management? Consul. Reverse proxy and load balancing? Traefik. You get the idea 🙂
- “Integration Tests with Containers” (Philipp Krenn): You do not test the “real” datastore with mocks and in-memory databases. Embedded databases (if available) and especially Testcontainers provide a much better alternative for integration tests.
- “More flow: essential Techniques for Continuous Delivery” (Orig: “Mehr Flow: Essenzielle Techniken für Continuous Delivery”) (Johannes Seitz): Trunk-based development (when done right) is more in the spirit of Continuous Integration than feature branches. Have a second build pipeline that builds and tests with all your dependencies from LATEST.
This might help to prevent big-bang migrations to a newer framework version or similar.

- “Helm – The Better Way to Deploy on Kubernetes” (Reinhard Nägele): Helm is released in version 3. The Tiller is gone. There is a migration guide and a migration plugin.
- “The Importance of Fun in the Workplace” (Holly Cummins): If you look in a room and everyone is having fun, the team is productive and on time! Germans are easily impressed by jokes. Ducks make everything funnier.
- “100 x Jenkins and not a bit tired” (Orig: “100 x Jenkins und kein bisschen müde”) (Frederic Gurr): Jiro is a cluster-based Jenkins infrastructure for projects hosted by the Eclipse Foundation. Jsonnet is a simple JSON extension that allows (among others) real comments in JSON files!
- “Cloud Native Transformation Patterns” (Pini Reznik): Cloud Native is not only about the infrastructure and architecture, it is also about the teams and the culture. They identified and described some common patterns of a Cloud Native transformation.
- “Tools to Build Container Images” (Martin Höfling, Patrick Harböck): There are more tools to build images than just “docker build”. docker build has security and scalability problems. For small teams they recommend docker (still a valid choice), Buildah or BuildKit (which is already available as an experimental feature in docker via docker buildx). For multiple teams with a provided Kubernetes infrastructure: Kaniko, Makisu. And for something completely different (no Dockerfile!): Bazel, Jib, Cloud Native Buildpacks.

Conclusion

I really enjoyed ContainerConf! Besides all the new input I got, the conference itself was very well organized. The venue was spacious and the catering was delicious (self-service sweets bar!). And it was also easy to get into some interesting conversations with other participants during the breaks. Thanks especially to Bastian, Jonas and Sandra for the amazing evening on Wednesday!
Most importantly, the conference allowed me to take a deep dive into the container ecosystem. I now have a much better understanding how certain bits and pieces work together and what are promising technologies to look out for. I hope you could also learn something new and if you have any questions, please write a comment – I’m happy to answer them!
https://blog.codecentric.de/en/2020/01/brave-new-container-world/
CC-MAIN-2020-05
refinedweb
1,839
55.24
I just finished writing up an answer on SO to a user's question: how do I implement cascading selects (where the options on each select are exclusive of what has already been selected)? Here is the solution I came up with.

Let's take a closer look at what's going on.

<div ng-

var n = 3;
$scope.resultArray = [];
for (var i = 0; i < n; i++) {$scope.resultArray.push({})}

Ok, so first we start by preparing the array into which we would like our choices to populate. I've populated it with empty objects such that the values will bind correctly from the child scopes (of the ng-repeats) and the parent scope (which is where the options logic is occurring). I use track by $index so I don't get a dupes error from Angular.

<select ng-</select>

$scope.carArray = function (index) {
    var cars = ['Renault', 'Holden', 'Ford', 'Dodge'];
    return cars.filter(function (el) {
        return ($scope.resultArray.map(function(e){return e.car}).slice(0, index).indexOf(el) === -1)
    })
}

Given that every select element must show a different set of options, I have implemented a function in the parent scope to accept the child scope's $index and return an array. Let's focus on the second select element (where index = 1): we look at the full array of cars and then filter it. We need to check if any of those car values exist in our resultArray, but we're only interested in those array entries that are BEFORE the entry for our own index, which is why we use the slice method. If the car is not in that part of the resultArray then it's good to be selected at the given index :)

Hope that helps anyone. Post a comment if you would like clarification or (even better) have a niftier solution ;)
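The exclusion idea is language-neutral; here is the same logic as a plain Python function (a sketch only — carArray above is the AngularJS original, and options_for is my own name for the illustration):

```python
CARS = ['Renault', 'Holden', 'Ford', 'Dodge']

def options_for(index, chosen):
    # chosen holds one selection per select element (None if unset).
    # Only selections made BEFORE this index are excluded, mirroring
    # the slice(0, index) call in the Angular version.
    taken = chosen[:index]
    return [car for car in CARS if car not in taken]

print(options_for(0, [None, None, None]))      # ['Renault', 'Holden', 'Ford', 'Dodge']
print(options_for(1, ['Holden', None, None]))  # ['Renault', 'Ford', 'Dodge']
```

Because only earlier indices are excluded, changing an earlier select automatically restores its old value to every later select's options, exactly as in the Angular solution.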
https://coderwall.com/p/p-ztxq/cascading-selects-in-angularjs
Using WPF with Managed C++

1. Introduction

The purpose of this article is twofold. In the first half we discuss what WPF is, why and how to program WPF using Managed C++, and give a high-level overview of the WPF architecture. Later we scratch the surface of loan amortization with a working example of loan amortization in WPF using C++.

2. What is WPF?

Before studying WPF, one might ask: what is WPF? WPF is the abbreviation of "Windows Presentation Foundation". It is a next-generation presentation system for building Windows client applications that can run stand-alone as well as in a web browser (i.e., XBAP applications). WPF is based on the .NET environment; it is managed code and can theoretically be written in any .NET-based language such as Visual C#, VB.NET, or Managed C++. WPF was introduced with .NET 3.0 along with a few other important technologies, such as Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF), but here we are going to study only WPF. Many programmers think that WPF is a feature of Visual C# and VB.NET and can be used only from those languages. Although writing WPF programs in these languages is quite easy and fun, WPF is not limited to them. WPF is in fact a feature of .NET, introduced with version 3.0; therefore technically any .NET language can use it. If this is the case, then why are so many WPF samples written only in C# and VB.NET, even in MSDN? The best answer is probably XAML. When using C# or VB.NET we can take full advantage of XAML, which is not available in VC++.NET. It means that when you write WPF code in Managed C++, you are on your own and have to write code for everything.
It may be a daunting task, but it is not impossible, and in fact a few samples are available with the Microsoft SDK, such as PlotPanel, RadialPanel, and CustomPanel.

2.1. Why Managed C++ for WPF?

The next question is why we should use Managed C++ in Visual C++ to write a WPF application when we can do the same thing in C# or VB.NET with XAML. There can be different reasons:

- You have a lot of code written in unmanaged VC++ and it is not possible to rewrite everything in C#. You want to take advantage of both managed and unmanaged code in your project, such as using the MFC document/view architecture with the rich user interface of WPF, without creating any new DLL in C#.
- Portions of your program should be optimized for speed, so you write unmanaged code for them. WPF internally uses the same technique for performance reasons to call DirectX.
- You want to hide the implementation of some portion of your program or algorithm, so you write it as unmanaged code and no one can reverse engineer it using ildasm.
- Just for fun.

2.2. WPF Programming in VC++

To create the simplest WPF program using Managed C++, you have to add references to the .NET assemblies windowsbase.dll, presentationcore.dll, and presentationframework.dll. In addition, the program must be compiled with the /clr switch because it is managed code. Here is a diagram showing a project with references to these three DLLs added. To add a reference, right-click the project in the Solution Explorer tree and select "References..." from there. If we want to create a simple window-based program, it would look something like this:

#include <windows.h>
using namespace System::Windows;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmd, int nCmd)
{
    MessageBox::Show("Hello World");
}

This program does nothing more than display one message box.
We can further shorten the program by using main instead of WinMain and avoid including the windows.h header file altogether, but in that case we will see a black console window behind the message box. If we want to make something more useful and interesting, we have to create objects of at least two classes: Window and Application. But remember, there can be only one object of the Application class in the whole program. Here is the simplest program showing the usage of the Window and Application classes:

#include <windows.h>
using namespace System;
using namespace System::Windows;
[STAThread]
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmd, int nCmd)
{
    Window^ win = gcnew Window();
    win->Title = "Hello World";
    Application^ app = gcnew Application();
    app->Run(win);
}

The output of this program is a blank window with the title "Hello World". Here the Application class is used to start the WPF application and manage the state of the application and application-level variables and information, but it produces no output of its own. It is the Window class that is responsible for drawing the window on the screen. Run should be the last method call in the program, because it does not return until the program closes. The return value of Run is the application exit code returned to the operating system. It is not necessary to pass the window object as a parameter to the Run function of the Application class. We can call Run without any parameter, but then we have to call the Show or ShowDialog function of the Window class before calling Run. The difference between Show and ShowDialog is that Show displays a modeless window, while ShowDialog displays a modal dialog. For our simple application it doesn't make any difference. You can inherit your classes from the Window and Application classes to store application-specific or window-specific information.
But remember, your class must be declared with the "ref" keyword, and you must use the "gcnew" keyword to create an instance of it on the managed heap. Here is a simple program showing the usage of user-derived Window and Application classes:

#include <windows.h>
using namespace System;
using namespace System::Windows;

public ref class MyWindow : public Window
{
public:
    MyWindow()
    {
        Title = "Hello World";
    }
};

public ref class MyApplication : public Application
{
};

[STAThread]
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmd, int nCmd)
{
    MyWindow^ win = gcnew MyWindow();
    MyApplication^ app = gcnew MyApplication();
    app->Run(win);
}

The output of this program is the same as the previous one. In this program we can store all the application-specific information in the MyApplication class and the window-specific information in the MyWindow class. For example, we set the title of the window in the constructor rather than in main after creating the object. We can also set other properties of the window, such as its background color, size, etc., in the same place, i.e., the constructor. These classes and their usage look quite similar to MFC. In an MFC-based application we also need objects of two classes, named CWinApp and CWnd. Similarly, we create only one object of a CWinApp-based class and call the Run method of CWinApp.

2.3. WPF Class Hierarchy

As we have seen, to make the smallest WPF application that displays its own window we have to create objects of at least two classes, named Window and Application. Before going further, let's take a look at these two classes in a little more detail. Here is a class diagram showing the inheritance chain for the Application and Window classes.
https://www.codeguru.com/cpp/cpp/cpp_managed/general/article.php/c16355/Using-WPF-with-Managed-C.htm
Build and Releasing with VSTS for multiple AWS Serverless Stacks

With any project it's always good to be able to split up the build and the release stages of your application. This allows you to keep propagating the same build artifacts across each environment, so you can be confident that the code will work the same once it gets to production.

Prerequisites

For this example we will require the following to be set up:

- Visual Studio 2017 (Community edition will work fine)
- AWS Toolkit for Visual Studio 2017
- VSTS Account
- AWS Tools for Microsoft Visual Studio Team Services

Project

To get started we will use one of the sample templates within the AWS Toolkit. Open up Visual Studio and from the New Project menu select AWS Serverless Application (.NET Core), enter a name and location for your application, and then click OK. For simplicity we will be using the Simple S3 Function template. This template will create an AWS Lambda function and a CloudFormation template that provisions an S3 bucket and a function that triggers every time an object is created within the S3 bucket. Let's start by modifying this so that when an object is created in our S3 bucket, we call an HTTP endpoint with the bucket name, the object key, and the name of the CloudFormation stack this function was created from:

{
  "event": "ObjectCreated",
  "bucketName": "my-bucket-name",
  "objectKey": "my-object.png",
  "stackName": "my-cloud-formation-stack-name"
}

We'll then need to modify the C# Lambda function to provide the required functionality.
public class Function
{
    public static HttpClient HttpClient { get; } = new HttpClient();

    public static string StackName { get; } = Environment.GetEnvironmentVariable("STACK_NAME");

    public static string HttpEndpoint { get; } = Environment.GetEnvironmentVariable("HTTP_ENDPOINT");

    public async Task FunctionHandler(S3Event evnt, ILambdaContext context)
    {
        var s3Event = evnt.Records?[0].S3;
        if (s3Event == null)
        {
            return;
        }

        var json = JsonConvert.SerializeObject(new
        {
            @event = "ObjectCreated",
            bucketName = s3Event.Bucket.Name,
            objectKey = s3Event.Object.Key,
            stackName = StackName
        });

        var responseMessage = await HttpClient.PostAsync(HttpEndpoint, new StringContent(json))
            .ConfigureAwait(false);
        responseMessage.EnsureSuccessStatusCode();

        context.Logger.LogLine($"Posted JSON '{json}' to {HttpEndpoint}");
    }
}

The last step is to extend the CloudFormation template (serverless.template) to pass in the new environment variables required by our C# function.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "Template that creates a S3 bucket and a Lambda function that will be invoked when new objects are upload to the bucket.",
  "Parameters": {
    "BucketName": {
      "Type": "String",
      "Description": "Name of S3 bucket to be created. The Lambda function will be invoked when new objects are upload to the bucket. If left blank a name will be generated.",
      "MinLength": "0"
    },
    "HttpEndpoint": {
      "Type": "String",
      "Description": "The Http endpoint to where to post data to when objects are created in the S3 bucket.",
      "MinLength": "1"
    }
  },
  "Conditions": {
    "BucketNameGenerated": {
      "Fn::Equals": [ { "Ref": "BucketName" }, "" ]
    }
  },
  "Resources": {
    "Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": {
          "Fn::If": [
            "BucketNameGenerated",
            { "Ref": "AWS::NoValue" },
            { "Ref": "BucketName" }
          ]
        }
      }
    },
    "S3Function": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "MyCompany.MyServerlessApp::MyCompany.MyServerlessApp.Function::FunctionHandler",
        "Runtime": "dotnetcore2.1",
        "CodeUri": "",
        "Description": "Default function",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaFullAccess" ],
        "Events": {
          "NewImagesBucket": {
            "Type": "S3",
            "Properties": {
              "Bucket": { "Ref": "Bucket" },
              "Events": [ "s3:ObjectCreated:*" ]
            }
          }
        },
        "Environment": {
          "Variables": {
            "STACK_NAME": { "Ref": "AWS::StackName" },
            "HTTP_ENDPOINT": { "Ref": "HttpEndpoint" }
          }
        }
      }
    }
  },
  "Outputs": {
    "Bucket": {
      "Value": { "Ref": "Bucket" },
      "Description": "Bucket that will invoke the lambda function when new objects are created."
    }
  }
}

Push to source control

Now that we have made all our alterations to our serverless application, let's create a git repository, commit our changes, and push it to our hosted git solution of choice (we'll be using VSTS).

Invoke-WebRequest -Uri https://raw.githubusercontent.com/github/gitignore/master/VisualStudio.gitignore -OutFile .gitignore
git init
git add .
git commit -m "MyServerlessApp"
git remote add origin <repository-url>
git push -u origin --all

Notice we're also downloading VisualStudio.gitignore from GitHub; this excludes all the files and folders created by Visual Studio that are not required in source control.
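When template parameters are added like this, a quick local check that the template still parses and declares the parameters the release step will supply can save a failed deployment later. A small sketch in plain Python (the trimmed template below is a stand-in for the full serverless.template, not the real file):

```python
import json

# A trimmed-down stand-in for serverless.template; only the keys we
# want to check are included here.
template_text = """
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "BucketName": {"Type": "String", "MinLength": "0"},
    "HttpEndpoint": {"Type": "String", "MinLength": "1"}
  }
}
"""

# json.loads raises ValueError if the template is malformed JSON.
template = json.loads(template_text)

# The release step passes these via --template-parameters, so they
# must exist in the template's Parameters section.
expected = {"BucketName", "HttpEndpoint"}
missing = expected - set(template.get("Parameters", {}))
print("missing parameters:", missing)  # -> set()
```

A check like this can run as an extra build step, so a typo in the parameter names fails the build rather than the CloudFormation deployment.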
The Build

Build Source

Let's go to the Build and Release section of VSTS, then create a new Build definition and select the source of the build; in our case it will be the git repository that we pushed to in the last section. When asked to select a template, choose the Empty Process option, as we will be creating our own build pipeline.

Build Pipeline

Our build pipeline will consist of 5 parts:

- dotnet restore
- dotnet build
- dotnet lambda package
- Copy serverless.template
- Publish artifacts

dotnet restore

The dotnet restore and dotnet build steps will just be the default dotnet core steps with the commands restore and build selected respectively.

dotnet lambda package

Our dotnet lambda package step will be a dotnet core step with a custom command of lambda; however, we will have to specify some additional arguments:

package --output-package $(build.artifactstagingdirectory)/MyCompany.MyServerlessApp.zip

This tells the lambda CLI to create a lambda serverless package that can be deployed later down the line.

Copy serverless.template

We'll also need to include the CloudFormation template (serverless.template) in our deployment artifacts. This task copies serverless.template into the VSTS artifact staging directory.

Publish Artifact

We'll need to add a Publish Artifacts task with the standard defaults, which creates an artifact with the name drop from the $(build.artifactstagingdirectory) path.

Running the build

Save and queue a build; once it has completed successfully you will notice that an artifact with the name drop is attached to the build. If you navigate into this artifact you'll see our built C# serverless application.

Release Pipeline

Now that we have our build artifacts ready to deploy, we need to create a Release pipeline to deploy to AWS. However, before we get started on our pipeline, we need to install the AWS VSTS extensions and configure an AWS service endpoint.
We will not cover the details of setting this up, but you can follow the Getting Started section of the AWS VSTS documentation, which will guide you through the process.

New Pipeline

Now, within the All release pipelines section of VSTS, we can create a new release pipeline and then, from the Select a template screen, select Empty process (once again we will be building our own pipeline). This gives us an empty canvas to work with.

Artifacts

To start with, we need to tell VSTS where to pull artifacts from for use within our release pipeline. Select the Add an artifact block and select the project where we built our artifacts in the last section. We will leave the settings at their defaults, as these work perfectly for our scenario.

Environments

DevTest Environment

Now let's create our first environment. We'll call this DevTest for the time being, but feel free to name it to correspond to the stack you will be creating. Our environment needs 2 tasks to deploy the serverless application:

- Create Temp csproj
- Deploy to Lambda

Create temp project file

We first need to create a temporary csproj file with a CLI tool reference to Amazon.Lambda.Tools, as the Lambda deploy task runs a dotnet restore and uses the Amazon.Lambda.Tools CLI internally. You can check out this GitHub issue for more information. We can create a file in multiple ways within VSTS, but for simplicity we will use the File Creator VSTS task. For the file path we will set it to $(System.DefaultWorkingDirectory)/_MyApplication-CI/drop/Tools.csproj.
For the file content we'll set it to the following:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="2.2.0" />
  </ItemGroup>
</Project>

AWS Lambda .NET Core Deployment

Next we will create an AWS Lambda deployment task that fills in the rest of serverless.template and then pushes it to CloudFormation, along with our code, to create a new CloudFormation stack. Select the AWS credentials you wish to use for this deployment; note that these will have to have the correct permissions within AWS to create all the resources required by CloudFormation. Select the region where you want this CloudFormation stack deployed; we'll be using EU (Ireland) [eu-west-1]. We will be deploying a serverless application, so select Serverless Application as the Deployment Type. Give the stack a name (we'll call ours DevTest-MyApp); we'll also need a place to store the serverless templates, and we've already created an S3 bucket named faedb8aa-86a3-4575-8e0e-c106cbbaee67. The tricky part is the additional lambda tools command line arguments: we need to pass in package for the zip file within our deployment artifacts, template for the base template to use when deploying the CloudFormation template, and also the template-parameters. For our DevTest we will use the following values:

--package "$(System.DefaultWorkingDirectory)/_MyApplication-CI/drop/MyCompany.MyServerlessApp.zip" --template "$(System.DefaultWorkingDirectory)/_MyApplication-CI/drop/serverless.template" --template-parameters "HttpEndpoint=;BucketName="

Testing the Release Pipeline

If we now go and create a release from the latest version of the artifacts, this will automatically create the new template for our DevTest stack and push it to S3. Eventually, CloudFormation will start to build our serverless application stack.
Testing the DevTest Stack

We can now test that our serverless application works as expected by dropping a file into our newly created S3 bucket, which should post a message to the endpoint that we gave CloudFormation when creating our stack. If we navigate to the Resources section of the CloudFormation stack, we will see a link to the new S3 bucket that was created. We can click the link and we will end up at the S3 bucket within the AWS Console; upload a file and watch the magic begin! Now if we flip back to RequestBin we will notice that a POST request has hit our endpoint with all the information we would have expected. We can also check CloudWatch to see our log messages, which are automatically streamed from Lambda to CloudWatch.

Production Environment

With our DevTest environment now fully working, we need to set up the production environment of the serverless application. We can simply do this by going back to our release pipeline and cloning our current AWS DevTest environment. Once cloned, rename the environment to AWS Production. We will then have to alter the stack name within our AWS Lambda .NET Core Deployment task; this time we will call it Production-MyApp, and now we are ready to roll our application out to production! Note: we could have also changed our CloudFormation template parameters, such as HttpEndpoint, to point to another endpoint for production, but for simplicity we'll keep them the same.

Create another release

The only thing left to do now is create another release and allow CloudFormation to build our production environment for the serverless application. Once everything is built you'll notice that we now have identical stacks between DevTest and Production. We can upload another file into our newly created production S3 bucket, and this will trigger our production dotnet core lambda function. Below you can also see the 2 lambda functions that were created for each stack by CloudFormation.
Combined Power of VSTS and AWS CloudFormation

As you can appreciate, this gives us a lot of power: it allows VSTS to track our work items and lets us progress them into each environment, instead of the default approach of just having a stack deployed on every build.
https://kevsoft.net/2018/08/13/build-and-releasing-with-vsts-for-multiple-aws-serverless-stacks.html
Benoit Fouet wrote: > Michel Bardiaux wrote: > [snip] >> OK, -r works differently from -ab/-vb/-b *now*, but there is no >> logical reason why it should. A generic behavior is always good from a >> point of view of documentation and support (only half that much doc to >> know, etc). So we *should* work towards -ar/-vr (and it shouldnt >> matter that they are different internally). >> > well i need to know one thing: do you want to have, for instance, -b and > -vb to act in the same way ? Bottom line: -b has to act *as now*. -vb has to act reasonably, it needs not be 100% the same as -b, but its always better to apply the "principle of least astonishment", so yes, it would be better if -b and -vb were equivalent. > >> What I object to is that the same *value* (default or not) could apply >> to both. >> > on that i already said i don't really care whether i have to be precise > in the command line (i keep the bitrate example): > -vb for video, -ab for audio, -b for both... > i won't argue, as it depends on everybody's feelings :) OK, I think I see where the misunderstanding came in. If -b had not existed before, having -b for both would be a matter of taste. My point was that since -b *does* exist, its semantics should not be changed. > [snip] >> Well we have probably a dozen or so, most of them mission-critical, >> some of them on customer sites. So any major command-line >> compatibility breakage is a potential catastrophe. To guard against >> that, for the production environments I always use frozen versions >> rather than latest svn, but that's a band-aid at best: as soon as I >> need a fix, say in some codec, I have the devil's choice of either >> backporting the correction, or fix our scripts. >> > is it really a devil's choice ? A devil's choice means both sides of the choice are unpleasant. > won't you *always* choose the latter ? No, but it would be too long to explain why. 
> >> (There is a commercial solution, called the 48-hours day, but we cant >> afford it :-) ) >> > well, it's the same here, they're talking about going from north pole to > south one depending on season, and adapt work day to sun day :) > >>>> [snip] >>>> >>>>>> A radical proposal: keep all existing options as compatibility >>>>>> synonyms, and go for getopt_long (--vbitrate 200k) for a new set of >>>>>> options. >>>>>> >>>> What, no reaction? >>>> >>>> >>> well, no... i think it could be better to split options so that audio >>> ones are on one side, video on another one, subtitle one too, and so >>> on... this would some code duplication for AVOption, but well, maybe >>> it's finally the best thing to do... >>> >> We're talking about 2 different things here. My proposal was about >> keeping compatibility while implementing a totally new options scheme. >> Simple dashes would be 'old style' options, double dashes the new ones. >> > > why would we/you (i dont really know if i can tell "we", i'm a bit too > young here ;) ) want to manage both options ? > Because I want to keep the old ones, so to be creative one needs a new namespace: I propose the space of double-dash options. --
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-March/032073.html
Let's write a class to load a Wavefront OBJ file and render it with OpenGL. I have created a model of a futuristic tank (see Figure 11-5), which was built with AC3D and exported as mytank.obj and mytank.mtl. My artistic skills are limited; feel free to replace my model with your own 3D object. You can use any 3D modeling package that can export OBJ files. The class we will be building is called Model3D and will be responsible for reading the model and storing its geometry (vertices, texture coordinates, normals, etc.). It will also be able to send the geometry to OpenGL to render the model. See Table 11-4 for the details of what Model3D needs to store.

Table 11-4. Information Stored in the Model3D Class

self.vertices: A list of vertices (3D points), stored as tuples of three values (for x, y, and z)
self.tex_coords: A list of texture coordinates, stored as tuples of two values (for s and t)
self.normals: A list of normals, stored as tuples of three values (for x, y, and z)
self.materials: A dictionary of Material objects, so we can look up the texture file name given the name of the material
self.face_groups: A list of FaceGroup objects that will store the faces for each material
self.display_list_id: A display list id that we will use to speed up OpenGL rendering

In addition to the Model3D class, we need to define a class to store materials, and a class for face groups, which are polygons that share the same material. Listing 11-3 is the beginning of the Model3D class.

Listing 11-3.
Class Definitions in model3d.py

# A few imports we will need later
from OpenGL.GL import *
from OpenGL.GLU import *
import pygame
import os.path

class Material(object):

    def __init__(self):
        self.name = ""
        self.texture_fname = None
        self.texture_id = None

class FaceGroup(object):

    def __init__(self):
        self.tri_indices = []
        self.material_name = ""

class Model3D(object):

    def __init__(self):
        self.vertices = []
        self.tex_coords = []
        self.normals = []
        self.materials = {}
        self.face_groups = []
        # Display list id for quick rendering
        self.display_list_id = None

Now that we have the basic class definitions, we can add a method to Model3D that will open an OBJ file and read the contents. Inside the read_obj method (see Listing 11-4), we go through each line of the file and parse it into a command string and a data list. A number of if statements decide what to do with the information stored in data.

Listing 11-4. Method to Parse OBJ Files

    def read_obj(self, fname):

        current_face_group = None
        file_in = file(fname)

        for line in file_in:

            # Parse command and data from each line
            words = line.split()
            command = words[0]
            data = words[1:]

            if command == 'mtllib': # Material library

                # Find the file name of the texture
                model_path = os.path.split(fname)[0]
                mtllib_path = os.path.join(model_path, data[0])
                self.read_mtllib(mtllib_path)

            elif command == 'v': # Vertex
                x, y, z = data
                vertex = (float(x), float(y), float(z))
                self.vertices.append(vertex)

            elif command == 'vt': # Texture coordinate
                s, t = data
                tex_coord = (float(s), float(t))
                self.tex_coords.append(tex_coord)

            elif command == 'vn': # Normal
                x, y, z = data
                normal = (float(x), float(y), float(z))
                self.normals.append(normal)

            elif command == 'usemtl': # Use material
                current_face_group = FaceGroup()
                current_face_group.material_name = data[0]
                self.face_groups.append(current_face_group)

            elif command == 'f': # Face

                assert len(data) == 3, "Sorry, only triangles are supported"

                # Parse indices from triples
                for word in data:
                    vi, ti, ni = word.split('/')
                    # Subtract 1 because Obj indices start at one,
                    # rather than zero
                    indices = (int(vi) - 1, int(ti) - 1, int(ni) - 1)
                    current_face_group.tri_indices.append(indices)

        # Read all the textures used in the model
        for material in self.materials.itervalues():

            model_path = os.path.split(fname)[0]
            texture_path = os.path.join(model_path, material.texture_fname)

            texture_surface = pygame.image.load(texture_path)
            texture_data = pygame.image.tostring(texture_surface, 'RGB', True)

            # Create and bind a texture id
            material.texture_id = glGenTextures(1)
            glBindTexture(GL_TEXTURE_2D, material.texture_id)

            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR)
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1)

            # Upload texture and build mip-maps
            width, height = texture_surface.get_rect().size
            gluBuild2DMipmaps(GL_TEXTURE_2D, 3, width, height, GL_RGB, GL_UNSIGNED_BYTE, texture_data)

One of the first commands in an OBJ file is usually mtllib, which gives the name of the material library file. When this command is encountered, we pass the file name of the material library to the read_mtllib method (which we will write later). If the command is a piece of geometry (vertex, texture coordinate, or normal), it is converted to a tuple of float values and stored in the appropriate list. For instance, the line v 10 20 30 would be converted to the tuple (10, 20, 30) and appended to self.vertices. Before each group of faces there is a usemtl command, which tells us which material subsequent faces will use. When read_obj encounters this command, it creates a new FaceGroup object to store the material name and the face information that follows. Faces are defined with the f command and consist of a word for each vertex in the face (3 for triangles, 4 for quads, etc.). Each word contains indices into the vertex, texture coordinate, and normal lists, separated by a forward slash character (/). For instance, a face word might specify that the first point of a triangle uses vertex 3, texture coordinate 8, and normal 10.
These triplets of indices are stored in the current face group and will be used to reconstruct the model shape when we come to render it. Following the code that parses each line in the OBJ file, we enter a loop that reads the textures in the material dictionary and uploads them to OpenGL. We will be using mip-mapping and high-quality texture scaling.
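The index bookkeeping can be seen in isolation with a small, self-contained sketch of the f-line parsing (independent of the Model3D class; the sample face data below is made up for illustration):

```python
def parse_face_line(line):
    """Parse an OBJ 'f' line into zero-based (vertex, texcoord, normal)
    index triples, mirroring the logic in read_obj."""
    words = line.split()
    assert words[0] == 'f'
    data = words[1:]
    assert len(data) == 3, "Sorry, only triangles are supported"
    tri_indices = []
    for word in data:
        vi, ti, ni = word.split('/')
        # OBJ indices start at one; Python lists start at zero
        tri_indices.append((int(vi) - 1, int(ti) - 1, int(ni) - 1))
    return tri_indices

print(parse_face_line('f 3/8/10 4/9/11 5/10/12'))
# -> [(2, 7, 9), (3, 8, 10), (4, 9, 11)]
```

The returned triples can be used directly to index self.vertices, self.tex_coords, and self.normals when rebuilding the triangle at render time.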
https://www.pythonstudio.us/pygame-game-development/seeing-models-in-action.html
From: Douglas Gregor (gregod_at_[hidden])
Date: 2002-02-20 12:16:05

On Tuesday 19 February 2002 06:19 pm, you wrote:
> Documentation:
>
> * My experience with the mutex types in Boost.Threads suggests that
> Signals needs a FAQ and one of the entries needed is a description of
> how the noncopyable behavior of boost::signal does not have to mean
> that objects containing a boost::signal need to be noncopyable as well.
>
> * I think that maybe the tutorial should show an actual example of
> using a "named slot" that uses something other than std::string for
> the "name".

Thank you, I'll add both.

> *.
> Tests:
>
> * The use of other Boost libraries in the test programs is at least a
> little suspect. Failures in these other libraries may cause a
> misrepresentation of the suitability of Boost.Signals for a given
> platform/compiler. Note that I did not evaluate whether or not it
> was possible/useful to factor these other Boost libraries out of the
> tests, however.

There are three tests that fall into this category: dead_slot_test.cpp, deletion_test.cpp, and random_signal_system.cpp. The former two use Boost.Bind (this dependency could be removed without too much hassle). The random_signal_system test relies on the BGL to mimic the signal connection graph, and it would be horrible to rewrite all of that graph-handling code. Unfortunately, there are platforms (e.g., Borland C++) where the BGL does not work but Signals does. Maybe I can just document around the problem?

> * Some of the tests never do any "assertions". This makes them
> useless in automated regression testing. I'd recommend either
> reworking them to make the proper "assertions" or to move them to the
> examples directory.

Egads, thanks.

> Design/implementation:
>
> * I wonder if last_value, etc. should be promoted up to boost from
> boost/signal. They seem to be useful concepts not tightly coupled to
> the Boost.Signals library.

If anyone has a use for it, sure.
I just couldn't find any reason to do this except within Signals.

> * I wonder if signals_common.hpp should be demoted down to
> boost/signal/detail. This helps distinguish the code as detail
> without even opening the file.

Okay, will do.

> * I wonder if some of the names used in Boost.Signals are too common
> for placing directly in namespace boost.

I assume you're referring to 'connection' and 'trackable'? If so, I think the long-term goal should be to split the Signals library into two: a Tracking library and a Signals library. Everything related to tracking connections would become part of the Tracking library, which the Signals library would use instead of having its own mechanism. Then it might not be so unreasonable for 'connection' and 'trackable' to be in the Boost namespace.

> * It might be useful to have an overloaded connect() that takes a
> second parameter of type trackable. This would allow an alternative
> to derivation from trackable, though at the expense of a more
> complicated interface. The trade-off might be worth it to some,
> though.

I think I'd like to consider other alternatives to derivation before adding these overloads. I guess I'd prefer something like:

sig.connect(track(some_function_object, a_trackable_object));

Then the notion of attaching additional trackable objects would be separate from the notion of connection. This goes back to the idea of having a Tracking library.

Doug

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2002/02/25501.php
Sickness and Stream of Consciousness

I'm feeling under the weather, so pardon this rather unstructured blog posting.

In between sniffling, changing diapers, and eating too much, work is progressing nicely on the Heron interpreter. I am looking at an official beta version (in other words, a feature-complete Heron 1.0) for the end of January.

I spent a lot of time hammering out the design of the module system by picking the brains of Gilad Bracha. He officially disapproves of the design, one major reason being that the super-class of classes is fixed and namespaces are hard-coded. For many of us with a C++/Java/C#/Delphi background this may seem bizarre at face value, but there are some interesting languages out there with much more flexible object-oriented systems (e.g. Smalltalk and Common Lisp).

I like being able to rely on static architectures in software, though, and I think it is key to constructing extremely large and robust software. If you can't even produce a UML diagram to describe your architecture because the class relations change at run-time, then I think it is hopeless to manage software of any significant size or complexity.

Anyway, I think the important thing that I took away from reading Gilad's papers, and studying various module systems, is the fact that Heron modules can be instantiated ...
http://www.drdobbs.com/architecture-and-design/sickness-and-stream-of-conciousness/228701025
Question: So my hard drive is failing, and I'm getting a new one. I did a fresh install of Win7 a few months ago, and since it's a relatively fresh install, I was hoping I could just move my data to the new drive. I know there are many software programs that clone/back up drives, but I have a copy of TrueImage 2010 and would like to use that (though I am open to better options). Below is my plan. I would like to know if there is a flaw in it, so I can catch it before the point of no return (i.e., the new drive is installed but there is no way to restore my data).

1. Back up my C: drive with TrueImage (it won't let me use the "Clone" feature since it does not detect 2 drives on my machine).
2. Install the new drive.
3. Partition the drive (?).
4. Use the TrueImage boot disc to restore from the backup.

Answer 1: I know you said that you purchased TrueImage; however, if it is not straightforward and you are having problems, I would go with a free (and known-good/working) solution. Personally I would use GParted, which is one of my favourite tools. Simply boot into it, select the drive and choose copy, then select the new drive, choose paste, and click apply. Another tool, which I know a lot of people are using now, is EaseUS Disk Copy. I personally do not like it as much, but it works - and if you do not like GParted, it is nice to have a few options!

Answer 2: Acronis TrueImage is very powerful. I'm mostly familiar with their live CD. If you are having trouble cloning your drive, first try their live CD before giving up on them. Also make sure that both drives are hooked up properly and that both show up in the BIOS. It should let you clone from the live CD. If you don't like Acronis, these other backup tools are worth trying:

- PING
- Macrium Reflect
- CloneZilla
http://superuser.com/questions/154336/swapping-hard-drives-and-moving-data?answertab=oldest
E4/Resources/Work Areas

The scope of our work is naturally the functionality provided by Eclipse Resources today, but it also reaches into other areas. This page is about the implementation/design aspect of the work. See E4/Resources/Requirements, Use-Cases and Goals for user-facing descriptions of what we want to achieve.

Contents

1. Support Resources as of Today
2. Big Rocks for Improvement
3. Other Ideas and Work Items

Support Resources as of Today

What are Resources today? See the Resources wiki. In short: FCADBS = Files, Containers, Attributes, Deltas, Builders, Synchronization. Resources are an IDE concept. Clients expect them to model reality (the file system) - important for external tools. It's probably not a good idea to try to generalize resources too much. EFS is stateless; Resources add state.

- Files (== Streams), Folders (== Containers), groups of them (== Projects) and their attributes (timestamp, encoding, nature, markers, ...)
- A Project is a logical group of resources which shares some common attributes and (currently) provides model-specific (nature) interpretation. In Eclipse, projects are always disjoint from each other (except for linked resources == aliases), which enables some concurrency. Projects can reference each other (dependencies).
- Caching of attributes (statcache) with explicit refresh, notifications and synchronization, implemented with high performance and low memory footprint.
- Resource Delta Notifications (important: MoveDeleteHook for Team), also for markers
- Support for locking and atomic operations
- Hooks for Natures, Model Mappings and Builders. Do these really belong in Resources? Perhaps, in the interest of RCP, these should be split off? There are several copies of the resource delta mechanism (EMF and Databinding Observables have similar mechanisms too); maybe the concept should be generalized.
- Hooks for Team support and Synchronization - Special resource kinds (Derived, Hidden, Phantom, Team-private) Separation of Concerns is important - keep independent layers with clearly specified responsibilities. - org.eclipse.core.filesystem - Filesystem (EFS) - org.eclipse.core.contenttype - Content Types - Encodings and other resource attributes - org.eclipse.ide (the UI side of dealing with resources) - Working Sets - Builders and their attributes (UI) - org.eclipse.team (team support / synchronization) Big Rocks for Improvement The "Big Rocks" are themes where multiple individual items should likely be looked at in the full context, and the overall story of E4 Resources must be consistent. For the three Big Rocks defined, it might be possible to install separate working groups. Alias Management Linked Resources and Symbolic Links are a means of introducing overlapping resources (Aliasing). In case of an Alias, all contributors of the aliased item must be kept in sync. Currently, Eclipse provides some support for this in terms of Workspace aliases, but not for file system based aliases (symbolic links). - Alias Handling (Make Symbolic Links first-class citizens): bug 198291 If the file system provides aliasing (e.g. by symbolic links), it can have problematic effects on Eclipse: E.g. 2 editors editing 2 file paths which really refer to the same file may lead to data corruption; findFilesForLocation() sometimes fails; getCanonicalPath() is not implemented in EFS; and others. Some of these problems are hard to solve, because it's sometimes not clear whether two aliases of the same file should actually be resolved in the same context, or with different context. The goal is to use Eclipse UI to see symbolic links, create/modify them, and provide the necessary support to handle them properly (i.e. be aware of aliasing). Accurately represent file system structure in Eclipse. bug 185509 comment 2. Many large real-world projects do have symbolic links. 
Depending on the actual file system, symbolic links may not be the only kind of aliasing. Perhaps we need an extendable alias manager? Visualization Galileo, rest later? - Clarify the Semantics of Aliases. One context, or multiple contexts for an aliased file? One or multiple sets of markers? - Suggest multiple contexts by default, unless overridden by user Workspace and Project Structure To date, Eclipse Projects are quite tightly tied to the underlying file system structure. This has the advantage of Separation of Concerns: The question what's added or removed to a project is cared for by the underlying file system or Team Provider. Also, ensuring disjoint non-overlapping resources makes it easier to support IWorkspace.find*ForLocation() queries, which are important when associating attributes and settings with resources, as well as concurrency and notifications/listeners. On the other hand, not all real-world projects can be pushed into this tight jacket. Some of the items below may be possible to solve in the Galileo Stream, remaining backward compatible. - Support physically nested projects: This has been identified as one of the Top Ten Architectural Problems in all of Eclipse long ago (item 5): Real-world projects, especially C/C++, often don't fit the "flat, disjoint" project structure on the filesystem which Eclipse would like to see today. Make setup easier and more flexible. Loosen restrictions of where a Project can be created. Create sub-projects for team sharing, re-use and multi-language. Get legacy code into Eclipse more easily. bug 245412, and bug 210907 (Wizard). Maybe Galileo? - Flexible Project Structure: File-list based projects. Allow adding files from "anywhere" to a project by drag and drop, like Visual Studio or Tornado, or file list from automated scripts. Support linking by relative paths and environment variables. Question: Just improve Linked Resources, or come up with something totally new? 
In terms of separation of concerns and team support, the current Eclipse way (project structure implicitly defined by the file system) is better, but real world lives with file-list-based projects. Existing Proposal by Serge Beauchamp (Freescale) on bug 229633, Discussion on platform-core-dev mailing list, see also the CDT:Flexible_Project_Structure. Maybe Galileo? - Variable-based Linked Resources: See bug 229633. At the minimum, ${PROJECT_LOC} and Environment Variables. Ideally more based on context. Variables that can dynamically change are problematic (need to be tracked by Alias Manager). Dependencies may be problematic, if variable providers are pluggable and depend on resources themselves. - Non-Workspace Resources: Support searching resources outside the workspace from within Eclipse (bug 192767). Support loading files from anywhere into Eclipse by dbl clicking them (bug 60289 - RSE Local Subsystem provides this now). Resource Deltas for External Folders. Other editors can do this. Make Eclipse more pervasive and sticky. Galileo? - Multi-Workspace: Allow looking at multiple workspaces at the same time in one instance (bug 245399). Support multiple different versions of a project at the same time. Solves part of "Namespace Resolution" issue. Also buys user-level Preferences to span multiple workspaces. Impact unclear, but probably related to "Session" concept which E4 might get anyways. - Resource Tree Filters. bug 252996. Allow adding / excluding files and folders from the resource tree by pattern. Allow clients to define their view (perspective) on resources. May improve Performance at the cost of being less intuitive. Be sure to give visual feedback for what is excluded. Note that explicit exclusion seems to be supported in Eclipse 3.4 with the new IResource#HIDDEN flag. - Make Working Sets First-Class-Citizens. The Resource System should support declaration of resource groups (akin to working sets) by means of patterns (and/or enumeration). 
Client-defined perspectives on the Resource System: This would allow interested parties to register for notifications on their subset of items only. Could that be a generic notification filter on top of Core Resources, or does it need to be in the Core? - Solutions: Logical Nesting and Project Grouping. bug 35973. Handle groups of projects together for open, close, search etc. Optionally inherit settings. See bug 229633 comment 7 for instance. Is this necessary on Core/Resource level? Probably, for namespace resolution. Project References exist already, though with limitations bug 128397. Related to Multi-Workspace? See also Eclipse 4.0/Wishlist - Namespace Resolution: This is a hard one: Allow multiple projects with the same name in a workspace bug 35973 comment 89. Less problems re-using a project with a given name. Allow looking at multiple versions of same project in one workspace (often requested!) - Overlapping Resources. Has been requested, but I think these are a bad idea. Or is this just another name for Aliasing? Don't make this the default, but the exception. -- Adding lots of complexity at very little gain. Probably better modeled with physically nested smaller subprojects which can be referenced. Or light-weight projects which are like a linked resource folder (container), plus resource filters, but do not allow overlap and do not introduce any project metadata (so they could live as a nested subproject only). Probably linked Resources and Symbolic Links as 1st class citizens. Why is this needed? Sub-item of Flexible Resources. - Logical and Virtual Resources. Is this about project structure or meta-data? Currently part of Resources but is it a different layer? See org.eclipse.core.resources.mapping package. Does this need improvement? Logical resources seem to exist in order to enjoy Workspace-provided markers, change notification and synchronization services. EMF seems interested in mapping Models to Resources. 
Improved Metadata and Persistence - Attach arbitrary metadata to Resources and track its lifecycle. Provide a (pluggable) means for persisting that metadata, or driving the metadata directly from the file system. Might help builders a lot... bug 128100 seems related. - Pluggable Project Persistence. Why do we always save into .project? - What if we could directly edit (Devstudio, Netbeans, Maven...) projects just by means of a pluggable project persistence mechanism? Allow .project file to not be at project root. bug 78438 UI: how to list / import projects if the persisted project data has arbitrary file name/pattern? Content Types might help. - Sharing, Linking or Inheritance of Project Settings. bug 255371 Modeled Preferences, bug 194414 a the project level, and bug 70683 at the workspace level. Reduce duplication of settings in multiple projects, allow users to locally override any team settings. E.g. user-defined warning levels to try something out. Just a special case of a pluggable project persistence provider? Allow Preferences to span multiple workspaces. Simplify administration of many similar projects with globally administered settings. Makes projects more manageable at the cost of being less understandable. Should projects always be self-contained? How to share settings? - Project settings are really a level above plain Resources. Different clients (natures) can implement this differently, e.g. Maven settings. It would still be good to have some commonality in concept. Very Interesting for large organizations, but is it aligned with our most important work items? - Probably interesting related to Maven for build, which is hierarchical. - Multiple projects in 1 directory. Another use-case of pluggable project persistence? Nokia Request to be like Visual Studio: Multiple project files for users to easily find them. Not sure if this isn't only asking for trouble... 
perhaps better have project references in one folder, which reference the actual projects in subfolders ala bug 78438. This request generates overlap, needs filters. Do we really need this? MSA: I think this type of composition should be not be done at the Resource level. Something like working sets could be used for that. - Workspace Description Files: Allow opening a workspace or project by double clicking a file (like in MS Visual Studio), bug 245405. Facilitated on-boarding a team including import of all Preferences. Also for users with multiple workspaces (simpler switching). Maybe Galileo? - Add/Remove project type/nature. Meta-info about the project is a layer above plain Resources. Currently adding/removing a nature requires editing the project description file :-( Supporting addition of natures "officially" may make projects more flexible. But sometimes, new (additional) natures are likely better represented with a separate physically nested subproject for the new nature. Non-Local Resources The draft E4/Project Proposal talks about the E4 mission being to build a next generation platform for pervasive, component-based applications and tools. Distributed computing is an industry trend and must be supported. The question is, at what layers this support should reside. EFS provides transparent addition of remote resources already now. But the concept of "Deep Refresh" is problematic with remote resources and new concepts may be needed. In many cases, clients will need to be aware of resources being "remote" at a high level. - Non-Local Resources. Allow parts of the workspace to be virtual or non-local, represented by "The Network". Make non-local / non-physical elements first class citizens. Improvements of EFS, lazy Refresh, virtual resources, asynchronous calls etc. - see also IUniversalPath idea. Lots of work and risk. I'd strongly propose to keep "non-local stuff" and "local stuff" separate (e.g. separate projects, dont mix them). 
EFS has shown that transparently adding remote stuff is problematic: In terms of backward compatibility, old-style clients of the old API will never treat network failures and latency properly for non-local resources. It must be explicit. - Remote Workspaces. In a client/server Eclipse, the Workspace may be non-local. Not sure how this is related to non-local resources. Remote Projects might make more sense than fully remote workspaces? Or, in a multi-workspace scenario allow one (some) workspaces to be remote? John Arthorne has proposed that in a Client/Server based Eclipse, the whole workspace together with some code for caching and management could be remote (i.e. the code directly accessing the resources would be on the same machine as the workspace). - Weak Refresh and Precomputed Stat. For large dynamic clearcase views, or distributed workspaces on slow remote systems, a global workspace refresh is unacceptably slow. Support parts of the workspace to run with weaker refresh policy or import precomputed stat info. - Caching and Synchronization of workspace resources with other partners. Currently exposed by ISynchronizer in core.resources but isn't that another layer? In terms of separation of concerns, think about layers for remote support. - EFS Notification API. With improved asynchronous support, EFS will probably need a notification API - bug 112980. Or, EFS gets replaced by something else such as Apache Commons VFS. Improve Concurrency and Programming Model Multi-Core is a clear industry trend and needs to be accounted for by the Core architecture supporting improved concurrency. This is important in order to scale, especially with remote resources (which are slow to access). Easier programming means less time and less bugs for everyone. This likely needs to be addressed in the context of all of E4 (to get consistent, pervasive patterns and idioms for concurrency and programming models). 
- Listener Order and Race Conditions: Right now, when a ResourceModificationListener performs some work as part of listening, when are other listeners notified? The order matters (e.g. JDT vs. CDT), requiring awkward workarounds. Especially during Project Open, events may not be received by parties not yet instantiated. Should there be a history of such events? Some clear APIs for influencing the order of listeners are needed, plus the ability to notify well-known "owners" of resources before all others.
- Asynchronous APIs. Several resource operations are documented as potentially long-running. Most of these are synchronous. Asynchronous APIs might help improve workspace concurrency, since they may allow clients to give up unnecessary locks while the operation is running.
- Improve Workspace Concurrency. SchedulingRules and Jobs often need to lock the entire Workspace unnecessarily (bug 240888). The current model makes it hard to have multiple background jobs run on disjoint parts of the workspace at the same time. Would this be fixed by more asynchronous access? How much do we really need this?
- Avoid too much work in ResourcesPlugin#start(). See bug 181998

Other Ideas and Work Items

Depending on the priority that people working on E4/Resources think the following items have, some of them may be promoted to "big rocks". For now, these are listed separately because they seem smaller, or not directly related to the Core Resources layer (so they could probably be addressed on other layers, or by 3rd-party plugins on top of core resources).

- Shared Reference Workspace: Related to Multi-Workspace: Allow multiple users read-only access to a single shared workspace at the same time. Ideal for browsing shared 3rd-party libs. Team-sharing more setup for shared resources.
- Modeling the Resource System. Do we need this? Resources are performance critical. EMF might be interesting for arbitrary attached attributes, listeners and undo/rollback, but is it needed?
Perhaps on a separate modeling/attributes layer on top of core resources? Decide together with "Modeling the Workbench", Eclipse Application Model, and a common Listener / Concurrency model. - Getting rid of Project for RCP. Is the notion of "Project" a plumbing or User Artifact? Projects are overloaded. Where should builders etc be hung off? What is the relevant Core of the Resource System that may be interesting for RCP and may be worth stripping? Layers Other than Core Resources - Improve Working Sets. On the UI Working Sets, improvements are also due. Working sets by pattern, with automatic addition / removal. Better team sharing for working sets. - Improvements to Content-Types. UI for Project-specific content types. Associate files with a correct icon/editor even if the file extension is not unique. Adapt to legacy structures more easily (case sensitive, patterns). - Contenttypes are a separate plugin already now. Likely possible as a non-breaking incremental improvement. Enumeration might be a workaround. - Builders. ICommand and builders extension point. Do these belong into resources? Incremental builders are at the core of the Platform, but couldn't they also be clients of resource delta notifications? There is a need to make ICommand extensible in order for new CDT build to be better integrated and store its settings more easily. Related to Resource Attributes.
http://wiki.eclipse.org/E4/Resources/Work_Areas
Canonical Voices

Jussi Pakkanen: But can we make it faster? (2013-11-21)

A common step in a software developer's life is building packages. This happens both directly on your own machine and remotely when waiting for the CI server to test your merge requests.

As an example, let's look at the libcolumbus package. It is a common small-to-medium sized C++ project with a couple of dependencies. Compiling the source takes around 10 seconds, whereas building the corresponding package takes around three minutes. All things considered, this seems like a tolerable delay.

But can we make it faster?

[…]

Clearly, reducing the last part brings the biggest benefits. One simple approach is to store a copy of the chroot after dependencies are installed but before package building has started. This is a one-liner:

    sudo btrfs subvolume snapshot -r chroot depped-chroot

Now we can do anything with the chroot, and we can always return by deleting it and restoring the snapshot. Here we use -r so the backed-up snapshot is read-only; this way we don't accidentally change it.

With this setup, prepping the chroot is, effectively, a zero-time operation. Thus we have cut total build time from 162 seconds to 13, which is a 12-fold performance improvement.

But can we make it faster?

After this fix the longest single step is the compilation. One of the most efficient ways of cutting down compile times is CCache, so let's use that. For greater separation of concerns, let's put the CCache repository on its own subvolume:

    sudo btrfs subvolume create chroot/root/.ccache

We build the package once and then make a snapshot of the cache:

    sudo btrfs subvolume snapshot -r chroot/root/.ccache ccache

Now we can delete the whole chroot. Reassembling it is simple:

    sudo btrfs subvolume snapshot depped-chroot chroot
    sudo btrfs subvolume snapshot ccache chroot/root/.ccache

The latter command gave an error about incorrect ioctls; the same effect can be achieved with bind mounts, though.

When doing this the compile time drops to 0.6 seconds. This means that we can compile projects over 100 times faster.

But can we make it faster?

[…]

If we look a bit deeper we find that these are all, effectively, single-process operations. (Some build systems, such as Meson, will run unit tests in parallel. They are in the minority, though.) This means that package builders are running processes which consume only one CPU most of the time. According to usually reliable sources, package builders are almost always configured to work on only one package at a time.

[…]

The small print

The numbers on this page are slightly optimistic. However, the main reduction in performance achieved with chroot snapshotting still stands.

In reality this approach would require some tuning; as an example, you would not want to build LibreOffice with -j 1.
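As a quick sanity check on the figures quoted above (162 s down to 13 s for the package build, 10 s down to 0.6 s for the compile), the speedup arithmetic works out like this:

```python
# Timings reported in the post, in seconds; taken as given.
full_build = 162.0      # stock package build: deps + compile + packaging
snapshot_build = 13.0   # same build with the pre-depped chroot snapshot
compile_cold = 10.0     # plain compile of the libcolumbus source
compile_ccache = 0.6    # compile with a fully warmed ccache

chroot_speedup = full_build / snapshot_build    # ~12.5x: the "12-fold" claim
ccache_speedup = compile_cold / compile_ccache  # ~16.7x on the compile step alone

print(f"chroot snapshot speedup: {chroot_speedup:.1f}x")
print(f"ccache compile speedup:  {ccache_speedup:.1f}x")
```

The "over 100 times faster" figure refers to repeated compiles once both the snapshot and the cache are warm, which the post's own small print flags as slightly optimistic.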
Keeping the snapshotted chroots up to date requires some smartness, but these are all solvable engineering problems.

Michael Hall: Ubuntu App Developers community on Google Plus (2012-12-07)

Now that Google+ has added a Communities feature, and seeing as how Jorge Castro has already created:

- Ubuntu App Developer Community

Google+ communities are brand new, so we'll be figuring out how best to use them in the coming days and weeks, but it seems like a great addition.

Michael Hall: Looking for a few good devs (2012-09-11)

More than a few, actually. As part of our ongoing focus on app developers, and helping them get their apps into the Ubuntu Software Center, we need to keep the Application Review Board (ARB) staffed and vibrant. Now that the App Showdown contest is over, we need people to step up and fill the positions of those members whose terms are ending. We also want to grow the community of app reviewers who work with the ARB to process all of the submissions coming in to the MyApps portal.

ARB Membership

Two of the existing members, Bhavani Shankar and Andrew Mitchell, will be continuing to serve on the board, and Alessio Treglia will be joining them. But we still need four more members in order to fill the full 7 seats on the board.
ARB applicants must be Ubuntu Members and Ubuntu Developers, and should create a wiki page for their application.

ARB members help application developers get their apps into the Software Center by reviewing their packages, providing support and feedback where needed, and finally voting to approve the app's publication. You should be able to dedicate a few hours each week to reviewing apps in the queue and discussing them on IRC and the ARB's mailing list.

If you would like to apply, you can contact the current ARB members in #ubuntu-arb on Freenode IRC, or on the team mailing list (app-review-board at lists.ubuntu.com). The current term will expire at the end of the month, so be sure to get your applications in as soon as you can.

ARB Helpers

In addition to the 7 members of the ARB itself, we are building a community of volunteers to help review submitted packages and work with the authors to make the necessary changes. There are no limits or restrictions on members of this community, though a rough knowledge of packaging will surely help. This group doesn't vote on applications, but it is essential to getting those applications ready for a vote.

The ARB helpers community was launched in response to the overwhelming number of submissions that came in during the App Showdown competition. Daniel Holbach put together a guide for new contributors to help them get started reviewing apps, and you can still follow those same steps if you would like to help out.

Again, if you would like to get involved with this community, you should join #ubuntu-arb on Freenode IRC, or contact the mailing list (app-review-board at lists.ubuntu.com).

Michael Hall: Why you should 'Download for Ubuntu' (2012-06-15)

Last week we introduced a new 'Download for Ubuntu' campaign for upstreams to use on their websites, letting their users know that the app is already available in Ubuntu. We even generated a list of targeted upstreams we wanted to reach out to in order to spur the adoption of these buttons. What we didn't go into much detail about is why upstreams should use them.
I hope to remedy that here.

It's easy

[…] App Directory entry for the application itself, not any specific version of it.

It makes installing your app more appealing

[…]

It's good social exposure for your app

[…]

Users will be looking for it

[…] (a lazy Google search), the promise of a quick and easy install might just be the difference between a new user and a lost opportunity.

[…] add them to our list so we know who's contacting them, and what the status is.

    <a href="{{pkgname}}/">
      <img src="" title="Download for Ubuntu" alt="Download for Ubuntu button"
           width="122" height="49" />
    </a>

[…]

Package fixes

[…] Stable Release Update.

Backport newer versions

[…] backporting new versions of packages to stable releases of Ubuntu. And starting with 11.10, this repository is enabled by default.

[…] submit your package to be included in the development release. Once it's there, you can request that it be backported to one or more stable Ubuntu releases. You can use the requestbackport command-line tool (from the ubuntu-dev-tools package) to automate much of the process, or, if you're not running Ubuntu, simply file a bug to start the request.

Michael Hall: Pkgme help and mentors (2012-06-12)

Expanding on my previous post calling for pkgme backend contributors, […] James Westby (james_w) in the #pkgme channel on freenode.

Qt/qmake

[…]

Information about qmake: […]

Help Contact: Angelo Compagnucci

Flash

Flash applications can be packaged for Ubuntu by wrapping them in a GTK window that contains a WebKit browser widget, and an index.html file for it to load that embeds the given Flash file.

[…]

Help Contacts: Michael Terry and Stuart Langridge

HTML5

A backend for an HTML5 application would also require wrapping the target application in a GTK window with an embedded WebKit widget.
Only instead of creating an index.html, you would just point the Webkit widget to the target application’s HTML files.</p> <p>Help Contact: <a href="" target="_blank">Didier Roche</a></p> <h2>Java</h2> <p>The Java backend would need to parse ant’s build.xml files to extract information about the target application or an already built jar file’s manifest.</p> <p>Help Contact: <a href="" target="_blank">James Page</a></p> <h2></h2>Michael Hall: Contributing to Ubuntu: A better Pkgme2012-06-07T09:00:02ZMichael Hallnospam@nospam.com<p><a title="Pkgme Website" href="" target="_blank">pkgme</a> is a small utility created by <a href="" target="_blank">James Westby</a>, <a href="" target="_blank">backends</a>.</p> <p.</p> <p><strong>UPDATE:</strong> Here is a <a title="Pkgme help and mentors" href="" target="_blank">list of desired backends and mentors</a> to help you with them.</p> <p><span id="more-1108"></span></p> <h2>But I don’t know how to create packages!</h2> <p.</p> <h2>Ok, I’m interested, how do I start?</h2> <p>First of all, get a copy of the latest pkgme code from its bazaar branch in Launchpad:</p> <pre>bzr branch lp:pkgme ./pkgme</pre> <p>Then, create a VirtualEnv environment to install it into:</p> <pre>virtualenv ./env</pre> <p>Then, install it into the Virtualenv:</p> <pre>source ./env/bin/activate cd ./pkgme python setup.py develop</pre> <p>Now you’ve got a working pkgme installed and running in your virtualenv. You can leave your virtualenv by running ‘deactivate’. Time to get started on your backend!</p> <h2>Where do I put my new backend code?</h2> <p>Since we’re going to submit your new <a href="" target="_blank">backend</a> to the pkgme branch, we can just create it there:</p> <pre>cd .. 
mkdir ./pkgme/pkgme/backends/<your backend name></pre> <h2>Great, now I have an empty Backend, what do I put here?</h2> <p.</p> <p>Your want file is executed from the target application’s directory, so in your script ./ will be the root of the target application’s directory. This lets you script easily browse through the files in the application to determine how well it can provide packaging information for it.</p> :</p> <ul> <li>0 – no information can be provided about the project (e.g. a Ruby backend with a Python project).</li> <li>10 – some information can be provided, but the backend is generic (e.g. Ruby backend).</li> <li>20 – some information can be provided, and the backend is more generic than just language (e.g. Ruby on Rails backend).</li> <li>30 – some information can be provided, and the backend is highly specialized.</li> </ul> <h2>Now I have what I want, what do I do with it?</h2> .</p> <h3>Lots of scripts</h3> <p>For separate scripts, you will need to provide an executable in your backend directory for <a title="List of pkgme information fields" href="" target="_blank">each of the pieces of information</a> that pkgme might request. Each script should print that information to STDOUT, or exit with an error if it can not provide it.</p> <h3>Just one script</h3> <p <a href="" target="_blank">all the pieces of information</a>.</p> <p>You can test your new backend by switching to the directory of a project your backend is made to support, and running:</p> <pre>pkgme</pre> <p>Make sure your virtualenv is still activated, or pkgme won’t be found. If everything works, you should have a ./debian/ directory in the application’s root folder.</p> <h2>Hurray, my backend works. Do you want it?</h2> <p>Of course we want it! What a silly question. And it’s already in your local branch of pkgme too! 
Well, it's in the directory anyway; you still need to add it to the working set:

    cd ./pkgme/pkgme/backends/
    bzr add <your backend name>

Then commit your changes and push them back to Launchpad:

    bzr commit -m "Added backend for <your backend name>"
    bzr push lp:~<your lp username>/pkgme/add-backend-<your backend name>

Then head on over to […]

Michael Hall: Contributing to Unity for Artists: SVG Icons (2012-03-19)

Everybody knows that programmers can contribute to Unity, and I've shown in my previous posts that non-developers can still contribute features and fixes that make applications integrate better. But what if your skills lay more on the creative side of the spectrum?

Well it just so happens that you have something to contribute to Unity too. In fact, we're currently in need of some graphic design talent to put some extra polish on some areas of application integration. Specifically, we need people to help create vector art for application icons that only have raster images (PNG, XPM, etc.).

This wiki page contains a list of applications that have been identified as needing an SVG icon.

Now graphic creation isn't my specialty, so I'm not going to write a step by step guide to creating these images; that's up to you artists. What I am going to do, however, is walk you through the process of coordinating with the upstream application developers and submitting your finished image to Ubuntu.

1) Contact the upstream

This is an important step, because even if an application doesn't have an SVG icon in Ubuntu, there's still a chance that one already exists. Read over the first half of my post on upstreaming Quicklists for ways to get in contact with them. Ask them if they have an SVG source for their application's icon. If they do, that's great! You can take that and skip down to step #3. If they don't, then you will need to work with the upstream project to create one that is right for them.

2) Work with the current image

It's important that we don't try to re-brand an application unless the authors want it re-branded. What we want is a more flexible, scalable version of the icon we already have. If you are creating a new SVG file, try to keep as close to the raster image as possible, and be sure to talk to the upstream developers about any deviations or changes you need to make. And finally, keep with the spirit of open source and make your new image available to both Ubuntu and the upstream project under a copy-left license like the CC-BY-SA, or another permissive license of the upstream's preference.

3) Preparing your image

Since we are getting close to the release of 12.04, the requirements for any further changes are getting stricter. In order to get your image into the Precise packages, you will need to meet the following two criteria:

It must be approved by the upstream project. Since your image will be representing their application in Ubuntu, we absolutely need their acceptance of it before it can be used. This is why step #1 is so vitally important; make sure you are working and communicating closely with upstream from the very beginning.

It must be a plain SVG file. This is because it will be added as a patch file against the package, and patch files don't work well with binary data. Since a plain SVG file is text, not binary, it makes it much easier to convert into a patch.

4) Submit your new image

The wiki page containing the list of applications has a link to the corresponding bug report filed in Launchpad. When your image is ready, attach it to the bug report.

[Screenshot: attach_file_to_bug]

You will also need to add the upstream project to the bug report. Click the "Also affects project" link on the bug page, and choose the Launchpad Project that matches your upstream.

[Screenshot: also_affects_project]

That's it! Well, almost. Once we have your image, the application's package in Ubuntu will need to be updated to use it, but that will require some changes to packaging scripts and patch files, which will be the subject of a more technical post. But getting the necessary image is itself a big step.

Michael Hall: Contributing to Unity for non-developers: Package patching (2012-03-08)

[…]

Below I'm going to show you how to turn your code change into a package patch that is easy for Ubuntu developers to add to the distro's packages. Only do this if your submitted branch is to a package in main and it hasn't already been merged.

0) Check your source package format

The following instructions will only work on source packages using quilt 3.0 for managing patches.
Before you do anything else, check that the file debian/source/format contains the following:

    3.0 (quilt)

1) Find your revisions

[…]:

    bzr missing --mine-only ubuntu:geany

[…]

2) Generate the patch

Fortunately the package "bzr-builddeb" provides a command that makes this step easy.

    mkdir -p debian/patches
    bzr dep3-patch -d ubuntu:geany . > debian/patches/add_keywords.patch

Again, just replace 'geany' with your application's branch name, and dep3-patch will find the differences in your branch and convert them into a patch file.

Now that you have a patch file, we need to add it to the list of patches for this package. To do that, all you need is to add its name to the end of the debian/patches/series file like this:

    echo add_keywords.patch >> debian/patches/series

3) Convert your source changes

[…]

    bzr diff -r 32..31 | bzr patch

This causes bzr to generate a reverse-diff of your changes (by going from the higher to the lower revision), and then apply that reverse-diff to your current code, effectively undoing your changes.

Now you need to apply your new patch file using quilt, so that quilt knows about it:

    quilt push -a

Which should give you the following output if everything applies cleanly (if not, then your package is going to need some extra work, and you should ask for help from someone in #ubuntu-devel on freenode IRC):

    Applying patch add_keywords.patch
    patching file geany.desktop.in
    Now at patch add_keywords.patch

4) Log your changes

Since you are making changes to the package itself now, you need to add that information to the debian/changelog:

    export […]

[…] Secure APT also requires a Release and Release.gpg file signed with a known key. The scan.sh file sets all this up, using the apt-ftparchive command. The first apt-ftparchive call creates the Sources and […] Directory: headers […]. The second call to apt-ftparchive creates the Packages and Packages.gz files. As with the source files, we get some outside-the-chroot paths leaking in, this time as path prefixes to the Release file. I shamelessly stole this from the security team's update_repo script. The tricky part here is getting […]

/var/lib/sbuild/apt-keys/sbuild-key.pub won't be available inside the chroot, so the script copies it to what will be […] /repo is mounted in the chroot too. All prep.sh needs to do is add a sources.list.d entry so apt can find your local repository, and it needs to add the public key of the sbuild signing key pair to apt's keyring. After it does this, it needs to do one more apt-get update. It's useful to know that at the point when sbuild calls prep.sh, it's already done one […]

- /repo mounts for each different chroot
- A command line switch to disable the […]

Barry Warsaw: PEP 382 sprint summary (2011-06-22)

[…] __init__.py file, e.g. zope/__init__.py for all the Zope packages. In essence, it eliminates the need for these by introducing a new variant of .pth files to define a namespace package. Thus, the zope.interfaces package would own zope/zope-interfaces.pth and the zope.components package would own zope/zope-components.pth. The presence of either .pth file is enough to define the namespace package. There's no ambiguity or collision with these files the way there is for […] in the […] import.c, much of which predates PEP 302. That PEP defines an API for extending the import machinery with new loaders and finders. Eric proposed that we could simplify importlib to share a lot of code with the built-in C import machinery. This could have the potential to greatly simplify […]
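The backend-script protocol described in the pkgme posts above (each backend ships executables that print their answer to STDOUT, with a "want" score saying how well the backend applies) can be sketched in a few lines. This is a hypothetical illustration, not actual pkgme code; the file check is invented, and the score values follow the 0/10/20/30 convention listed above.

```python
#!/usr/bin/env python
# Hypothetical sketch of a pkgme backend "want" script (not real
# pkgme code). It is run from the target application's root
# directory and prints a score to STDOUT indicating how well this
# backend can provide packaging information for the project.
import os
import sys


def want_score(project_dir="."):
    # 10 = "some information can be provided, but the backend is
    # generic" -- here, a setup.py suggests a Python project that a
    # (hypothetical) Python backend could inspect.
    if os.path.exists(os.path.join(project_dir, "setup.py")):
        return 10
    # 0 = no information can be provided about this project.
    return 0


if __name__ == "__main__":
    sys.stdout.write("%d\n" % want_score())
```

A backend would provide one such executable per piece of information pkgme can request, each printing its answer to STDOUT or exiting with an error when it has nothing to say.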
http://voices.canonical.com/feed/atom/tag/packaging/
PartDesign Fillet

Description

This tool creates fillets (rounds) on the selected edges of an object. A new separate Fillet entry (followed by a sequential number if there are already existing fillets in the document) is created in the project tree.

Usage

- Select a single or multiple edges or a face on an object, then start the tool either by clicking its icon or going into the menu. In case you selected a face, all its edges are respected for filleting.
- In the appearing Task panel set the fillet radius either by entering the value, or by clicking on the up/down arrows.
- […] fillet will propagate along the chain.
- To edit the fillet after the function has been validated, either double-click on the Fillet label in the Project tree, or right-click on it and select Edit Fillet.

PartDesign Fillet vs. Part Fillet

PartDesign Fillet is not to be confused with Part Fillet of the Part Workbench. Although they share the same name, they are not the same, and are not used the same way. Here is how they differ from each other:

- The PartDesign Fillet is parametric. After a fillet has been applied, its radius can be edited; this is not possible with the Part Fillet.
- The PartDesign Fillet creates a separate Fillet entry (followed by a sequential number if there are already existing fillets) in the Project tree. The Part Fillet becomes the parent of the object it was applied to.
- The PartDesign Fillet offers a live preview of the fillet applied to the object before validating the function.
- The Part Fillet supports variable radii (with a start radius and an end radius). The PartDesign Fillet doesn't.

Known Issues

[…] located in libTKBRep.so, libTKFillet.so, etc., which are OCCT libraries. If this type of crash occurs, the problem may need to be reported and solved in OCCT rather than in FreeCAD. See the forum threads for more information: […]

The user is also responsible for the integrity of his or her own model. Depending on the model, it may be impossible to perform a fillet or chamfer if the body is not big enough to support that operation. For example, it wouldn't be possible to create a 10 mm fillet if an edge is separated only 5 mm from the next surface. In that case, the maximum radius for a fillet would be 5 mm; trying to use a larger value may result in a shape that doesn't compute, or even a crash. If using the exact limit of 5 mm doesn't work, it may be possible to use a very close approximation, like 4.9999 mm, to produce the same visible result.

Topological naming

Edge numbers are not completely stable, therefore it is advisable that you finish the main design work of your solid body before applying features like fillets and chamfers, otherwise edges could change name and filleted edges would likely become invalid. Read more in topological naming problem.

Scripting

The Fillet tool can be used in macros, and from the Python console, using the following function:

    Box = Box.makeFillet(3, [Box.Edges[0]])                                          # 1 fillet
    Box = Box.makeFillet(3, [Box.Edges[1], Box.Edges[2], Box.Edges[3], Box.Edges[4]])  # several fillets

- 3 = radius
- Box.Edges[2] = an edge identified by its number

Example:

    import Part
    Box = Part.makeBox(10, 10, 10)
    Box = Box.makeFillet(3, [Box.Edges[0]])                                          # for 1 fillet
    Box = Box.makeFillet(3, [Box.Edges[1], Box.Edges[2], Box.Edges[3], Box.Edges[4]])  # for several fillets
    Part.show(Box)
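The radius limit described in the Known Issues section above comes down to simple arithmetic. The sketch below is plain Python (no FreeCAD API involved); the epsilon back-off mirrors the 4.9999 mm workaround mentioned in the text, and the function name is made up for illustration.

```python
def max_usable_fillet_radius(clearance_mm, epsilon_mm=0.0001):
    """Largest fillet radius that still fits a given clearance.

    The fillet radius cannot exceed the distance between the edge
    and the next surface. When the exact limit fails to compute,
    back off by a tiny epsilon (e.g. 4.9999 instead of 5), as the
    wiki text suggests.
    """
    return clearance_mm - epsilon_mm


# An edge 5 mm away from the next surface supports just under 5 mm:
radius = max_usable_fillet_radius(5.0)
```

In a real model, the clearance would come from measuring the geometry, not from a hard-coded number.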
https://wiki.freecadweb.org/PartDesign_Fillet
[…] The mentioned case is common in a web application. Not all APIs fit the mentioned pattern, but there is a use case.

Postgres Example

Consider two tables, author and book, with the following schema. The Postgres function row_to_json converts a particular row to JSON data. Here is a list of authors in the table. This is simple; let me show a query with an inner join. The book table contains a foreign key to the author table. While returning a list of books, including the author name in the result is useful. As you can see, the query construction is verbose. The query has an extra select statement compared to a normal query. The idea is simple: first do an inner join, then select the desired columns, and finally convert to JSON using row_to_json. row_to_json has been available since version 9.2. The same functionality can be achieved using other functions like json_build_object in 9.4. You can read more about it in the docs.

Python Example

The Postgres drivers psycopg2 and pg8000 handle a JSON response, but the result is parsed and returned as a tuple/dictionary. That means if you execute raw SQL, the returned JSON data is converted to a Python dictionary using json.loads. Here is the function that facilitates the conversion in psycopg2 and pg8000. psycopg2 converts the returned JSON data to a list of tuples with a dictionary. One way to circumvent the problem is to cast the result as text. The Python drivers don't parse the text, so the JSON format is preserved. Carefully view the printed results. The printed result is a list of tuples with a string. For SQLAlchemy folks, here is how you do it. Another way to run a SQL statement is to use the text function. The other workaround is to unregister the JSON converter. These two lines should do:

    import psycopg2.extensions as ext
    ext.string_types.pop(ext.JSON.values[0], None)

Here is a relevant issue in psycopg2.
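As a rough illustration of the cast-to-text trick described above: the query can be built once and reused with or without the ::text cast. This is a sketch, not the article's actual code — the table and column names are hypothetical, and no database connection is shown.

```python
import json

# Hypothetical schema matching the article: author(id, name) and
# book(id, title, author_id). row_to_json turns each joined row
# into a JSON object; appending ::text keeps drivers such as
# psycopg2/pg8000 from eagerly parsing the JSON back into a dict.
def books_with_authors_query(as_text=False):
    cast = "::text" if as_text else ""
    return (
        "SELECT row_to_json(t){cast} FROM ("
        "SELECT b.id, b.title, a.name AS author "
        "FROM book b INNER JOIN author a ON a.id = b.author_id"
        ") t"
    ).format(cast=cast)


# What a driver would hand back for one row with the ::text cast:
raw_row = '{"id": 1, "title": "A Book", "author": "An Author"}'

# Without the cast, the driver effectively does this parse for you:
parsed = json.loads(raw_row)
```

With the cast, the JSON string can be streamed straight into an HTTP response; without it, you pay for a parse you may immediately undo with json.dumps.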
https://kracekumar.com/post/156769849745/return-postgres-data-as-json-in-python/
Gets or sets the size of the XRTableRow.

Namespace: DevExpress.XtraReports.UI
Assembly: DevExpress.XtraReports.v18.2.dll

The size of the XRTableRow is the size of the Rectangle object returned by its XRTableRow.BoundsF property.

The following example demonstrates how to create an XRTableRow object at runtime, via the XRTableRow.CreateRow method, and set some of its main properties.

    using System.Drawing;
    using DevExpress.XtraReports.UI;
    // ...
    public XRTableRow CreateXRTableRow()
    {
        // Create a table row containing three cells.
        XRTableRow row = XRTableRow.CreateRow(new SizeF(300F, 50F), 3);

        // Make the borders visible for all cells.
        row.Borders = DevExpress.XtraPrinting.BorderSide.All;

        // Set the border width for the row's cells.
        row.BorderWidth = 2;

        // Set the size of the cells.
        row.Cells[0].SizeF = new SizeF(120F, 50F);
        row.Cells[1].SizeF = new SizeF(90F, 50F);
        row.Cells[2].SizeF = new SizeF(90F, 50F);

        return row;
    }
https://docs.devexpress.com/XtraReports/DevExpress.XtraReports.UI.XRTableRow.SizeF
Hi

On Mon, Apr 02, 2007 at 12:05:07AM +0800, Xiaohui Sun wrote:

> M. optimization is certainly nice

yes, but you don't need that much complexity for it

> [...]
>
> > [...]
> >
> > > + ?

maybe (though maybe I misunderstand you again), but you definitely should check the thing so that it won't segfault when dereferenced

> > > + .

it is buggy (= not working)

> [...]
>
> > + case PIX_FMT_RGB32:
> > +     dimension = SGI_MULTI_CHAN;
> > +     depth = SGI_RGBA;
> > +     break;
>
> i think this is broken on either little or big endian
>
> [...]
>
> should I use PIX_FMT_RGBA instead, which is endian independent?

you should use the correct one, that is the one which works; they are documented in libavutil/avutil.h

> > + return (buf - orig_buf);
>
> superfluous ()
>
> I need to calculate the encoded size here.

yes you do, but you do not need: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-April/020944.html
Roan Kattouw <roan.katt...@gmail.com> changed:

               What|Removed |Added
    ----------------------------------------------------------------------------
                 CC|        |roan.katt...@gmail.com
             Status|NEW     |RESOLVED
         Resolution|        |FIXED

--- Comment #1 from Roan Kattouw <roan.katt...@gmail.com> 2010-01-19 04:49:00 UTC ---

I've created the WS: alias. There was a page at [[WS:sheer]] that couldn't be moved to [[Wikisaurus:sheer]] because the latter already existed; it was moved to [[Wikisaurus:sheer/BROKEN]] instead, and it's probably a redirect that can safely be removed.

I can't create the WT: alias because there's already a WT: namespace; after you've moved any pages and redirects in there to the Wiktionary: namespace (remember that WT:Foo and Wiktionary:Foo will be equivalent after I add the alias) and deleted the WT: pages, I can add the alias. A list of WT: pages is at: […]. Note that there's also a handful of talk pages: […]
https://www.mail-archive.com/wikibugs-l@lists.wikimedia.org/msg31899.html
Reply by syphilis:

Hi,

GWL_USERDATA is defined as -21 in winuser.h. Near the beginning of scintilla.xs, immediately following the line:

    #include "./include/Scintilla.h"

I would insert:

    #ifndef GWL_USERDATA
    #define GWL_USERDATA (-21)
    #endif

I don't think that insertion will create any problems. If it does, then we probably start to think that the GWL_USERDATA definition was skipped in the strawberry build for a very good reason.

Anyway, see how that goes - and if it fixes the problem, maybe ask the Strawberry developers (by posting to the […] mailing list) just how/why it is that their winuser.h definition of GWL_USERDATA is being skipped for Strawberry Perl.

Cheers,
Rob
http://www.perlmonks.org/?displaytype=xml;node_id=1035051
Learning ColdFusion 9: CFScript Updates For Tag Operators

Posted July 24, 2009 at 9:33 AM by Ben Nadel

ColdFusion 9 has made a number of upgrades to CFScript. Yesterday, I explored the CFScript-based updates in ColdFusion Components; today, I wanted to take a quick look at the other CFScript operators that were added to mimic ColdFusion tag functionality. When it comes to these tag operators, we have two basic kinds: those that have content bodies, such as CFThread and CFLock, and those that do not, such as CFParam and CFExit. Service-based tags that have bodies, such as CFMail and CFHTTP, are handled as actual ColdFusion components, not as operators, and will be covered in a later post.

To be completely honest, I find the tag operators to be very inconsistent. Some of them use name-value attribute pairs; some use anonymous strings; some can use either attribute pairs or anonymous strings, but not both (ex. property); some can use comment-based, JavaDoc style attribute definitions; some even have function-based alternatives (ex. throw); and some tags don't even have operator or object-based script equivalents (ex. CFContent)! It seems to be all over the map as far as implementation goes. When it comes down to it, I think you simply have to memorize what goes where and how specific tag operators are implemented (if they're implemented at all) - deduction doesn't seem to be an option.

That said, let's take a look at the tag operators that do not have content bodies. I won't be covering the CFProperty tag operator as that was discussed in the previous post. As this is really just a lesson in syntax, I'll leave most of my explanation in the comments.

    <cfscript>

        // Import a component name space for use with the NEW
        // operator.
        //
        // NOTE: You cannot use the import operator to import
        // tag libraries.
        import "demo.com.*";

        // Catch any error that was thrown. We will be catching
        // two errors here to demonstrate both the throw and the
        // rethrow tag operators.
        try {

            // Throw an error. This tag cannot use name-value
            // attribute pairs. You can only use one anonymous
            // string as demonstrated.
            //
            // NOTE: This has a function-based alternative,
            // throw(), that has many more options including
            // message, detail, and type arguments:
            // throw( type='', message='', detail='' );
            throw "You can't do that!";

        } catch( any error ){

            // Try to catch another error.
            try {

                // Rethrow the current top-level error. This tag
                // operator does not take any attributes.
                rethrow;

            } catch( any subError ){
                // Nothing to do here...
            }

        }

        // Include another template. This can only use the anonymous
        // string, not name-value attribute pairs.
        include "include.cfm";

        // Param the given values. This can handle all of the name-
        // value attribute pairs that CFParam can handle.
        param
            name="name"
            type="string"
            default="Tricia"
            ;

        // Exit out of this page.
        // NOTE: This cannot use name-value attribute pairs, only
        // the anonymous string.
        exit "exittemplate";

    </cfscript>

As you can see, I left a number of comments in the code above as to the limitations or alternatives of the tag operators. Now, let's take a look at the tag operators that have content bodies. Once again, as this is really just a lesson in syntax, I'll leave most of the explanation in the comments.

    <cfscript>

        // Lock the current template on the named lock.
        lock
            scope="application"
            type="exclusive"
            timeout="10"
            {

            // Copy the application scope.
            request.appData = duplicate( application );

        }

        // Start a transaction.
        // NOTE: We are not going to run any queries since that
        // is not relevant at this point.
        // NOTE: Transaction functionality has greatly increased
        // in ColdFusion 9 as well, but can be covered later.
        transaction
            action="begin"
            {

            // Run a query.
            include "insert_query.cfm";

            // Save this rollback point.
            // NOTE: This could have also been accomplished with the
            // function-based alternative:
            // transactionSetSavepoint( "SP1" );
            transaction
                action="setSavePoint"
                savepoint="SP1"
                ;

            // Run a query.
            include "insert_query.cfm";

            // Check to see if something went wrong.
            if (false){

                // Roll back the transaction.
                // NOTE: This can also be accomplished with the
                // function-based alternative:
                // transactionRollback();
                transaction action="rollback";

            }

            // Roll back to our save point above.
            // NOTE: This could have also been accomplished with the
            // function-based alternative:
            // transactionRollback( "SP1" );
            transaction
                action="rollback"
                savepoint="SP1"
                ;

        }

        // Launch an asynchronous thread.
        thread
            action="run"
            name="firstThread"
            appname="scriptdemo"
            {

            // Store a return value.
            thread.returnValue = "From Thread One";

        }

        // Launch a second thread.
        thread
            action="run"
            name="secondThread"
            appname="scriptdemo"
            {

            // Store a return value.
            thread.returnValue = "From Thread Two";

        }

        // Sleep the current thread.
        thread
            action="sleep"
            duration="10"
            ;

        // Check to see if our async thread has completed. If it has
        // not completed, then terminate it.
        if (cfthread.firstThread.status neq "completed"){

            // Terminate the thread.
            // NOTE: This could have been accomplished with the
            // function-based alternative:
            // threadTerminate( "firstThread" );
            thread
                action="terminate"
                name="firstThread"
                ;

        }

        // Join the second thread to the page.
        // NOTE: This could have also been accomplished with the
        // function-based alternative:
        // threadJoin( "secondThread" );
        thread
            action="join"
            name="secondThread"
            ;

    </cfscript>

As you can see above, the tag operators with bodies, namely CFThread and CFTransaction, have many function-based alternatives.
I guess these function-based alternatives will also apply to the tag-based versions of these operators; that said, I wonder how Adobe decided which tag actions to turn into functions and which to leave as just operators. All in all, something here feels very inconsistent. I don't foresee myself switching over to CFScript any time soon - I'm just a tag man at heart.

Reader Comments

Hey,

For me, I am a fan of the work to advance cfscript. I do about 75% of my coding in script so most of this is a welcome addition to me. Having said that, I have to agree with you. The improvements made for CF9 do seem a little inconsistent. It is going to be a challenge to remember what to use and where. I think that the beginner is going to find this all too confusing and shy away from it.

--Dave

@Dave,

I guess I shouldn't view the function alternatives as a problem; after all, those can be used with both script and tag-based coding. But one thing that I don't get is why CFLocation is !only! in method format? This never even made it to an operator. Of course, in my previous post, Adam Cameron pointed out that not everything with parentheses is actually a method call (ie. if(), catch()). But, that doesn't make me feel any better. If that was the case, then how do we know when it's an operator and when it's a method... and why some have one, some have the other, and some have both?

So ... we've gone from this:

    <cflock scope="session" timeout="5" type="exclusive"></cflock>

... to this:

    lock scope="session" timeout="5" type="exclusive" { }

That is, we've taken off the angle brackets and replaced an explicit block terminator with an anonymous closing brace. Welcome back to brace-counting Hell, boys. But it was worth it, right?
Right?

@rick, i totally agree.

@Rick,

Ha ha ha. I guess now I really need to switch to an editor that has matching brace highlighting!

While CF remains very verbose, it will be gaining a slightly more lightweight syntax. One of the issues that disturbs me with Adobe's implementation is the large number of language-level constructs they are introducing, rather than implementing these things as library-level constructs. For example, in Rails, transactions look like:

    ActiveRecord::Base.transaction do
      david.withdrawal(100)
      mary.deposit(100)
    end

'ActiveRecord::Base' is a class, 'transaction' is a class method on class 'ActiveRecord::Base', and the 'do ... end' syntax creates a code block and passes it to the 'transaction' method. The 'transaction' method internally sets up a transaction, runs the code block passed to it, and either commits the transaction or rolls it back depending on whether the block threw an exception.

With NHibernate, transactions look like:

    using(var txn = nh.BeginTransaction())
    {
      david.Withdrawal(100);
      mary.deposit(100);
      txn.Commit();
    }

'using' is a syntax element that permits the object defined in the parentheses to run code after the block of code in the braces ends, whether that block of code in the braces ended successfully or threw an error.

Neither C# nor Ruby has any language-level concept of transactions. It worries me that CF is introducing a language-level concept of transactions into CFScript, making the CFScript language rather complex and irregular. What Adobe should instead be working on is implementing namespaced library functions to implement such things as threads and transactions. (Language-level function-looking things, such as most of the CF standard functions, are not namespaced library functions but rather are language-level syntax elements.)

For example, can I continue to do the following? Or is this CF change a breaking change?
    var transaction = new lib.MyCustomTransactionClass(1, 2, "three");
    transaction.doSomeTransactionyStuff();

@Ben, please note that methods are functions bound to classes or objects. Unattached functions are not methods - this includes all CF standard functions.

@Justice, I appreciate your note about methods, but I like to keep things informal around here. As such, I will swap terms like method and function, or class and object and component. I guess it's just my revolting against the fact that in my QBasic class the teacher kept trying to tell me that "subroutines" didn't return values, only "functions" did. Really?? Really?? Well guess what, I like the term FUNCTION better... ok, ok, breathe, that was so many years ago, it's ok, it's over now :) I guess old habits are hard to break.

@Justice, What's funny now that I think about it is that it can be completely fluid in ColdFusion. For example, you can take an unattached Function and inject it into a component and bam, you have a Method. So, I guess it really depends on the context, not even so much on what it is.

@Ben, you are correct in noting that, in ColdFusion, functions are methods only when they are invoked in a certain way. But functions cannot intrinsically be methods in and of themselves. This is actually how JavaScript works as well: the object that the 'this' keyword in a JavaScript function refers to depends on how the function was invoked.

Ben, you mentioned that cfhttp and cfmail are handled as components and you'd cover them later... I'm trying to find documentation on these, and not seeing them in the Adobe beta cfml or dev guide documentation. Any idea where I can find that?

@Ted, No problem, I'll try to cover that this week.

I have a question: I'm considering printing barcodes from CF using a Windows Mobile device and a Zebra QL420+ mobile printer. Will <cfscript> send form and query data to C# so that I can utilize Zebra's SDK? I haven't tried anything yet. I would like to know is this possible?
Thanks, Khalif

@Khalif, I am not sure I understand your question. I would assume that any way that ColdFusion tags integrate with non-CF languages would be the same for CFScript (except for perhaps CFImport). How do you currently do this with tags?

@Ben Nadel, I'm going to try something in the morning. I will reply with my code attempt.

@Khalif, Sounds like a plan.

Hey Ben, Sorry for digging up an old post but I haven't been able to find this information anywhere. Is there a CFScript equivalent of cfexecute? Josh

@Josh, I don't think there is; but, that would actually be a fun little blog post. Maybe I can whip something up for this.
http://www.bennadel.com/blog/1663-Learning-ColdFusion-9-CFScript-Updates-For-Tag-Operators.htm
CC-MAIN-2013-20
refinedweb
1,976
64.91
Qt 5.2 built with Glib support under OS X - event loop doesn't work

Hi, I have built Qt 5.2.1 with Glib support under OS X 10.9. Here's a test case:

@
#include <iostream>
#include <glib.h>
#include <QApplication>

using namespace std;

static QApplication * qapp;

static int idle (void)
{
    cout << "Idle callback called" << endl;
    cout << "Quitting QApplication" << endl;
    qapp->quit ();
    return G_SOURCE_REMOVE;
}

int main (void)
{
    cout << "Creating QApplication" << endl;
    int dummy_argc = 0;
    qapp = new QApplication (dummy_argc, 0);

    cout << "Adding idle callback" << endl;
    g_idle_add ((GSourceFunc) idle, 0);

    cout << "Starting QApplication" << endl;
    qapp->exec ();
    cout << "QApplication finished" << endl;

    cout << "Cleaning up" << endl;
    delete qapp;
    qapp = 0;

    cout << "All done" << endl;
    return 0;
}
@

This code works perfectly under Linux; the callback is being called. Unfortunately, it doesn't work under OS X... unless you replace QApplication with QCoreApplication, then it works... I want to build a UI interface, so I have to use QApplication. Any idea why it doesn't work as expected?

Best regards, Michał

EDIT: Same problem with Qt 5.3.0.
https://forum.qt.io/topic/40666/qt-5-2-built-with-glib-support-under-os-x-event-loop-doesn-t-work
Just found this one online: #include <stdio.h> typedef struct { int ch; int count; } star_t; int main(void) { star_t foo[] = { {46, 22}, {47, 1}, {194, 1}, {180, 1}, {194, 1}, {175, 1}, {47, 1}, {41, 1}, {'\n', 1}, {46, 20}, {44, 1}, {47, 1}, {194, 1}, {175, 1}, {46, 2}, {47, 1}, {'\n', 1}, {46, 19}, {47, 1}, {46, 4}, {47, 1}, {'\n', 1}, {46, 13}, {47, 1}, {194, 1}, {180, 1}, {194, 1}, {175, 1}, {47, 1}, {39, 1}, {46, 3}, {39, 1}, {47, 1}, {194, 1}, {180, 1}, {194, 1}, {175, 1}, {194, 1}, {175, 1}, {96, 1}, {194, 1}, {183, 1}, {194, 1}, {184, 1}, {'\n', 1}, {46, 10}, {47, 1}, {39, 1}, {47, 1}, {46, 3}, {47, 1}, {46, 4}, {47, 1}, {46, 7}, {47, 1}, {194, 1}, {168, 1}, {194, 1}, {175, 1}, {92, 1}, {'\n', 1}, {46, 8}, {40, 1}, {39, 1}, {40, 1}, {46, 3}, {194, 1}, {180, 1}, {46, 3}, {194, 1}, {180, 1}, {46, 4}, {32, 1}, {194, 1}, {175, 1}, {126, 1}, {47, 1}, {39, 1}, {46, 3}, {39, 1}, {41, 1}, {'\n', 1}, {46, 9}, {92, 1}, {46, 17}, {39, 1}, {46, 5}, {47, 1}, {'\n', 1}, {46, 10}, {39, 2}, {46, 3}, {92, 1}, {46, 10}, {32, 1}, {95, 1}, {46, 1}, {194, 1}, {183, 1}, {194, 1}, {180, 1}, {'\n', 1}, {46, 12}, {92, 1}, {46, 14}, {40, 1}, {'\n', 1}, {46, 14}, {92, 1}, {46, 13}, {92, 1}, {46, 1}, {32, 1}, {'\n', 1} }; int i, j; for (i = 0; i < sizeof(foo) / sizeof(foo[0]); i++) { star_t bar = foo[i]; for (j = 0; j < bar.count; j++) { printf("%c", (char )bar.ch); } } return 0; } Discussions Just found this one online: It's not suppose to do anything other than list days. Something that could have been done like this:W. "Billiant" is sarcastic - the code here is less-than-brilliant...far less than brilliant. I wrote the following up as a joke to a friend. Curious if you .NET guys have ever written your own jokes? The following code prints a link for each day of the current month: Day 1, Day 2, etc. The links direct you to another page that can do something with the query string...Enjoy. 
(Oh, and yes, this IS a joke - so no flaming )

<?php
getDays();

function getDays() {
    $day = array("1","2","3","4","5","6","7","8","9","10","11","12","13",
                 "14","15","16","17","18","19","20","21","22","23","24",
                 "25","26","27","28","29","30","31");
    $trueDays = array();
    $arrayLinks = array();

    for ($i = 0; $i < count($day); $i++) {
        if ($i < date("t") && $i > 0) {
            if (is_numeric($i)) {
                $trueDays[] = ($day[$i] > 0) ? $day[$i] : 0;
            }
        }
    }

    if (is_array($trueDays) && 1 == 1) {
        $dayString = implode(",", $trueDays);
    }

    if (strlen($dayString) > 0) {
        foreach ($trueDays as $day) {
            $arrayLinks[] = "<a href='getDays.php?day={$day}'>Day {$day}</a>";
        }
    }

    print "<a href='getDays.php?day=1'>Day 1</a>, " . implode(", ", $arrayLinks);
}
?>

Ceiling user_name = "jonathansampson"

...but the world ends in 2012... Did London take this into consideration when they decided to host the games?

brian.shapiro said: London will have to do better in 2012, W3bbo

I don't recall ever experiencing this issue with anything but IE.

stevo_ said: ScottWFB said: *snip* display: block usually works.. but if not, use floating.. its not just an IE thing either..

@ScottWFB, no, I've found no solution. I haven't really noticed it lately though in IE7 - but that doesn't mean it won't creep up on me sometime in the future.

- I - fig.
https://channel9.msdn.com/Niners/jsampsonPC/Discussions
Have an upcoming Ruby on Rails interview for a junior web developer position? Experienced developers at Codementor have gathered together to collaborate on this list of Ruby on Rails interview questions to help you prepare for your Ruby on Rails technical interview. Can you answer them all?

The product team has a great new feature they want to add to your Ruby on Rails application: they want every model in the system to be able to retain special user notes. You realize that there will be a collection of forms and model code that will be duplicated in the dozen ActiveRecord models already in your application. What are some strategies you can employ for reducing duplication and bloated Active Record models? What are the pros/cons of each strategy? (Question provided by Jon Lebensold)

Because Ruby on Rails is an MVC framework, it can become tempting to try and fit everything into the Model or the Controller. Ruby on Rails is a powerful framework that provides many different mechanisms for describing our application and keeping our models and controllers nice and tidy. Below are two ways of reducing fat models. They illustrate different levels of shared understanding between the extracted functionality and the model in question.

ActiveSupport::Concern

If the code really belongs in the model (because it relies on ActiveRecord helpers), but there is a coherent grouping of methods, a concern might be worth implementing. For example, many models in a system could enable a user to create a note on a number of models:

    require 'active_support/concern'

    module Concerns::Noteable
      extend ActiveSupport::Concern

      included do
        has_many :notes, as: :noteable, dependent: :destroy
      end

      def has_simple_notes?
        notes.not_reminders_or_todos.any?
      end

      def has_to_do_notes?
        notes.to_dos.any?
      end

      def has_reminder_notes?
        notes.reminders.any?
      end

      ...
    end

The Concern can then be applied like so:

    class Language < ActiveRecord::Base
      include TryFind
      include Concerns::Noteable
    end

Pros: This is a great way of testing a cohesive piece of functionality and making it clear to other developers that these methods belong together. Unit tests can also operate on a test double or a stub, which will keep functionality as decoupled from the remaining model implementation as possible.

Cons: ActiveSupport::Concerns can be a bit controversial. When they are over-used, the model becomes peppered across multiple files and it's possible for multiple concerns to have clashing implementations. A concern is still fundamentally coupled to Rails.

Delegation

Depending on the source of the bloat, sometimes it makes better sense to delegate to a service class. 10 lines of validation code can be wrapped up in a custom validator and tucked away in app/validators. Transformation of form parameters can be placed in a custom form under app/forms. If you have custom business logic, it may be prudent to keep it in a lib/ folder until it's well defined. The beauty of delegation is that the service classes will have no knowledge of the business domain and can be safely refactored and tested without any knowledge of the models.

Pros: This approach is elegant and builds a custom library on top of what Ruby on Rails provides out of the box.

Cons: If the underlying APIs change, your code will likely need to be updated to match. Instead of coupling to your model layer, you've now coupled yourself to either Ruby on Rails or a third-party library.

Conclusion: This question helps demonstrate two critical skills every Ruby developer needs to develop: how to handle complexity from emerging requirements and how to decide on the most appropriate refactoring. By working through different refactoring strategies, I can explore a candidate's problem solving skills and their overall familiarity with Ruby on Rails and their knowledge of MVC.
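To make the delegation strategy concrete, here is a minimal plain-Ruby sketch; the class name, method, and formatting rule are invented for illustration and are not part of the article's codebase:

```ruby
# Hypothetical service object: it knows nothing about Rails or about the
# models that call it, so it can be unit-tested in complete isolation.
class NoteFormatter
  MAX_LENGTH = 140

  # Collapses runs of whitespace and truncates long note bodies.
  def format(body)
    normalized = body.to_s.strip.gsub(/\s+/, " ")
    return normalized if normalized.length <= MAX_LENGTH

    normalized[0, MAX_LENGTH - 1] + "…"
  end
end

NoteFormatter.new.format("  a   note\nwith   messy spacing ")
# => "a note with messy spacing"
```

A model or controller would simply call into this object; changing the formatting rules later touches only this one class, never the models.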
It's important to know which code is specific to the application and what can be generalized into a completely decoupled piece of functionality.

Having spent over a decade tapping away at a terminal, Jon has worked on large systems for Open Source projects, Fortune 500s, and non-profit organizations. Hire Jon Now.

Given an array [1,2,34,5,6,7,8,9], sum it up using a method: (Question provided by Codementor Nicholas Ng)

    def sum(array)
      return array.inject(:+)
    end

Summation of an array is one of the most fundamental concepts in programming, and there are a lot of methods to achieve it, such as iterating the array and summing the numbers. In Ruby, it's neat to know there is a method called inject, because it's so powerful yet simple.

What is metaprogramming? (Question provided by Codementor Nicholas Ng)

Ruby developers should know what metaprogramming is, because it is widely used, especially in popular frameworks such as Rails, Sinatra, and Padrino. By using metaprogramming, we can reduce duplicate code, but there is a downside where it will increase the complexity of the code in the long run. Here's an example of metaprogramming in Ruby: a user can have a lot of roles, and you want to check the authorization.

Normal scenario:

    def admin?
      role == 'admin'
    end

    def marketer?
      role == 'marketer'
    end

    def sales?
      role == 'sales'
    end

Metaprogramming:

    ['admin', 'marketer', 'sales'].each do |user_role|
      define_method "#{user_role}?" do
        role == user_role
      end
    end

Given this Human class implementation (Question provided by Codementor Nicholas Ng)

    class Human
      def talk
        puts "I'm talking"
      end

      private

      def whisper
        puts "I'm whispering"
      end
    end

What's the output of:

    Human.new.talk
    Human.new.whisper
    Human.new.send(:talk)
    Human.new.send(:whisper)

The outputs, in order:

    I'm talking
    NoMethodError: private method 'whisper' called for #<Human:0x007fd97b292d48>
    I'm talking
    I'm whispering

To explain: Human.new.talk calls a public instance method on a fresh instance, so it works perfectly.
The talk method is a public method and can be called by everyone. Human.new.whisper is trying to access a private method, which it is not allowed to do. Private and protected methods are not accessible from outside; they are only used internally. This is object-oriented design and can be used to structure the code so that implementation details are not exposed to consumer objects. Finally, Human.new.send(:talk) bypasses the access-control check, so the method can be called without raising an error. The same goes for Human.new.send(:whisper).

Nicholas Ng is the technical Lead in PropSocial and the Founder of Virtual Spirit, a Ruby on Rails tech firm that provides web & mobile development. Hire Nicholas Now.

Write code that splits a given array of integers into two arrays; the first containing odd numbers and the second containing even numbers (Question provided by anonymous Codementor)

    odds, evens = array.partition(&:odd?)

or

    array.each_with_object({odd: [], even: []}) do |elem, memo|
      memo[elem.odd? ? :odd : :even] << elem
    end
    # inject for ruby < 2.0 is fine as well

The straightforward approach would be to call array.select to store odds and then array.reject to store evens. There is nothing wrong with this, except it violates the DRY principle:

    odds = array.select &:odd?
    evens = array.reject &:odd?

Future modification of this code might accidentally change only one line of the above pair, breaking consistency. It is not likely the case for this particular example, but DRY usually pays off in the future. One might notice that the each_with_object example makes a single pass through the array, while the select/reject pair iterates the array twice, once for each method. In most cases, the performance penalty is not significant, but it should be taken into consideration when dealing with huge arrays.
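To make the single-pass versus two-pass trade-off concrete, here is a runnable sketch of both answers side by side (the sample array is the one from the earlier summation question):

```ruby
array = [1, 2, 34, 5, 6, 7, 8, 9]

# One pass, two named buckets.
buckets = array.each_with_object({ odd: [], even: [] }) do |elem, memo|
  memo[elem.odd? ? :odd : :even] << elem
end
# => { odd: [1, 5, 7, 9], even: [2, 34, 6, 8] }

# partition also walks the array once and returns [odds, evens].
odds, evens = array.partition(&:odd?)
# odds  => [1, 5, 7, 9]
# evens => [2, 34, 6, 8]
```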
How would you flatten (Question provided by anonymous Codementor)

    hash = { a: { b: { c: 42, d: 'foo' }, d: 'bar' }, e: 'baz' }

to

    { :a_b_c=>42, :a_b_d=>"foo", :a_d=>"bar", :e=>"baz" }

    λ = ->(h, key = nil) do
      h.map do |k, v|
        _k = key ? "#{key}_#{k}" : k.to_s
        v.is_a?(Hash) ? λ.call(v, _k) : [_k.to_sym, v]
      end.flatten #⇒ note flatten
    end

    Hash[*λ.call(hash)]
    #⇒ {:a_b_c=>42, :a_b_d=>"foo", :a_d=>"bar", :e=>"baz"}

Understanding recursion is important. While all these Fibonacci numbers and factorials are repeated over and over again, real-world tasks are less academic. Walking through a hash in the described manner is often a must. The exercise might be stated as the exact opposite: given the "flattened" form of the hash, rebuild its nested representation.

Given the following syntactic sugar: (Question provided by anonymous Codementor)

    (1..42).reduce &:*
    #⇒ 1405006117752879898543142606244511569936384000000000

What makes this notation an equivalent of

    (1..42).reduce { |memo, elem| memo * elem }

Does the Ruby parser handle this particular case, or could this be implemented in plain Ruby?

The candidate can monkeypatch the Symbol class with their own implementation of the aforementioned syntactic sugar, e.g.

    class Symbol
      def to_proc
        # this implementation is incomplete
        # — a more sophisticated question: why?
        # — even harder: re-implement it properly
        lambda { |memo, recv| memo.public_send(self, recv) }
      end
    end

    (1..42).reduce &:*
    #⇒ 1405006117752879898543142606244511569936384000000000

There is not much magic in this example. An ampersand converts an argument, which is apparently a Symbol instance here, to a Proc object by simply calling the to_proc method on it. The real implementation of Symbol#to_proc is done in C for performance reasons, but the candidate can write their own implementation in Ruby to make sure everything works as expected. Answers to additional questions in the code: the code above fails when the callee expects to receive one parameter only (e.g.
trying to use this implementation with Enumerator#each), raising an arity error:

    (1..42).each &:*
    ArgumentError: wrong number of arguments (given 1, expected 2)

To fix this, the candidate should use a splat argument in the lambda and analyze the number of arguments actually passed:

    lambda do |*args|
      case args.size
      when 1 then # each-like
      when 2 then # with accumulator
      ...

What's wrong with the code below and why? (Question provided by anonymous Codementor)

    require 'benchmark'

    puts Benchmark.measure do
      break if Random.rand(100) === 1 while true
    end

Operator precedence matters! The code will return

    LocalJumpError: no block given (yield)

Because do-end binds more weakly than a method's arguments, the block attaches to puts rather than to Benchmark.measure. That's why one either needs to surround the whole call to Benchmark.measure with parentheses, or to use curly brackets instead of do-end.

Provided we have a hash of a fixed structure (e.g. we receive this hash from a third-party data provider that guarantees the structure): (Question provided by anonymous Codementor)

    input = [
      {a1: 42, b1: { c1: 'foo' }},
      {a2: 43, b2: { c2: 'bar' }},
      {a3: 44, b3: { c3: 'baz' }},
      …
    ]

How can one build an array of cN values (['foo', 'bar', 'baz'])? Some examples would be

    input.map { |v| v.to_a.last.last.to_a.last.last }

or

    input.map { |v| v.flatten.last.flatten.last }

Here one could iterate through the array, collecting nested hashes and using an index to build the requested key name, but there is a more straightforward approach: Hash is an Enumerable, which gives the developer the ability to query it almost as one would an array.

Given the code below, how might one access the @foo variable from outside? Is it an instance variable or a class variable? On what object is this variable defined?
(Question provided by anonymous Codementor)

    class Foo
      class << self
        @foo = 42
      end
    end

You can access the variable with

    (class << Foo ; @foo ; end)

It's an instance variable, defined on Foo's singleton class, or more specifically, the eigenclass of Foo. Each class in Ruby has its own "eigenclass." This eigenclass is derived from the Class class. The Foo class is an instance of its eigenclass. This eigenclass has the only instance Foo; it likewise has instance methods, defined as class methods on Foo, and it might have instance variables, e.g. the @foo variable in this particular example.

What are the different uses of Ruby modules? Could you provide an example of each and explain why it is valuable? (Question provided by anonymous Codementor)

Traits/Mixins. Examples: Mappable, Renderable, Movable. Traits/mixins are a useful alternative to class inheritance when there is a need to acquire behavior that describes a trait (e.g. Renderable) instead of an is-a relationship (e.g. Vehicle), especially when there is a need for multiple traits, since a class can only inherit once.

Namespace. Examples: Graphics, Devise, Finance. Namespacing Ruby classes and modules avoids naming clashes with similarly-named classes/modules from libraries.

Singleton class alternative. Examples: Application, Universe, Game. Modules cannot be instantiated, therefore they can be used as an easy alternative to singleton classes to represent only one instance of a domain model via module methods (the equivalent of class methods).

Bag of stateless helper methods. Examples: Helpers, Math, Physics. Stateless helper methods receive their data via arguments without needing a class to be instantiated or keeping any state (e.g. calculate_interest(amount, rate_per_year, period)), so a module is used instead for holding a bag of stateless helper methods.

In addition to knowing the 4 different functions of modules in Ruby cited above, it's important to know when to use a module vs.
a superclass when doing object-oriented domain modeling, since that can greatly impact maintenance of the code a few months down the road in a software project.

For a further practical example, a car-race-betting game allows players to bet on cars rendered on the screen as moving at different speeds. When the game is over, players can print a sheet of game results representing each car's details and status to claim their prize at a casino. If the candidate were to implement all of these details in a car object, and later introduce differences between a Jaguar, Mercedes, and Porsche, the candidate would rely on a car superclass shared among three subclasses via inheritance. However, if in the future trucks and horses are thrown into the mix as well, the candidate would have to split the ability of an object to move, the ability to render an object on screen, and the ability to print object details into separate modules (Movable, Renderable, and Printable respectively), and mix them into each of Car, Truck, and Horse. Next, the three classes can each serve as a superclass for multiple subtypes as needed by the game (e.g. JaguarCar, FireTruck, ThoroughbredHorse). This will offer maximum separation of concerns and improved maintainability of the code.

Hope these 11 Ruby on Rails interview questions help you find and vet a Ruby developer's technical skill. What's more important is to also review a candidate's past projects, how the candidate has gone about solving issues previously faced, as well as other soft skills. By no means is this an exhaustive list of interview questions, but these are some to help you get started. Let us know below if you know any other Ruby on Rails interview questions!
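The trait-style use of modules described above can be sketched in a few lines; the class and method bodies are invented purely for illustration:

```ruby
# Two traits, each a module, mixed into a class with no shared superclass.
module Movable
  def move
    "#{name} moves"
  end
end

module Renderable
  def render
    "rendering #{name}"
  end
end

class Car
  include Movable    # Car acquires both traits without
  include Renderable # giving up its single inheritance slot.

  attr_reader :name

  def initialize(name)
    @name = name
  end
end

jaguar = Car.new("Jaguar")
jaguar.move   # => "Jaguar moves"
jaguar.render # => "rendering Jaguar"
```

A Truck or Horse class could include the very same modules, which is exactly the flexibility the car-race example relies on.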
https://www.codementor.io/blog/ruby-on-rails-interview-questions-du107w0ss
How To Add Google Maps in Flutter

Displaying maps is a core functionality of many mobile apps. Let's see how we can implement maps in Flutter apps in order to better the user experience.

Introduction

The Flutter ecosystem is flourishing and will definitely make a huge mark in the near future as one of the most established cross-platform mobile application development frameworks. The community is rapidly growing and there are already many powerful libraries available. In this tutorial, we are going to make use of the google_maps_flutter package in order to display maps in our Flutter apps. Maps are used in applications that deal with navigation and delivery, such as Yelp or UberEats, shopping applications, geolocation apps, etc. We can use them to show locations, track locations, provide real-time navigation, etc.

In this tutorial, we are going to use the Google Maps API from the Google Developer Console. The idea is to integrate a Google API key equipped with the Android Google Maps SDK into our Flutter project. Then, we will use the mentioned package to show the map on the app screen. We will also work with markers and their customizations. So, let's get started!

Create a New Flutter Project

First, we need to create a new Flutter project. For that, make sure that the Flutter SDK and other Flutter app development-related requirements are properly installed. If everything is properly set up, then in order to create a project, we can simply run the following command in the desired local directory:
If everything is properly set up, then in order to create a project, we can simply run the following command in the desired local directory: flutter create googleMapExample After the project has been set up, we can navigate inside the project directory and execute the following command in the terminal to run the project in either an available emulator or an actual device: xxxxxxxxxx flutter run After successfully building, we will get the following result in the emulator screen: Scaffolding the Flutter Project Now, we need to replace the default template with our own project structure template. First, we need to create a folder called ./screens inside the ./lib folder. Then, inside the ./lib/screens folder, we need to create a new file called Home.dart. Inside the Home.dart, we are going to implement a simple Stateful widget class returning a Scaffold widget with a basic App bar and an empty Container body. The code for Home.dart is shown in the code snippet below: xxxxxxxxxx import 'package:flutter/material.dart'; import 'package:flutter/widgets.dart'; class HomePage extends StatefulWidget { _HomePageState createState() => _HomePageState(); } class _HomePageState extends State<HomePage> { Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text("Maps"), centerTitle: true, ), body: Container( child: Center( child: Text("Maps"), ), ), ); } } Now, we need to replace the default template in the main.dart file and call the HomePage screen in the home option of MaterialApp widget as shown in the code snippet below: xxxxxxxxxx import 'package:flutter/material.dart'; import 'package:mapExample/screens/Home.dart'; void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { // This widget is the root of your application. 
      Widget build(BuildContext context) {
        return MaterialApp(
          title: 'Flutter Demo',
          debugShowCheckedModeBanner: false,
          theme: ThemeData(
            primarySwatch: Colors.blue,
            visualDensity: VisualDensity.adaptivePlatformDensity,
          ),
          home: HomePage(),
        );
      }
    }

Hence, we will get the screen as shown in the screenshot below:

Getting The API Key for Google Maps

Now, we need the API key for Google Maps in order to access the map in our application. The process to get the API key is easy. We need to log in to the Google Developer Console. In the developer console, we need to create a project if we don't have one already, which will lead us to the following screen:

In the above screenshot, we are in the Credentials view of our project. In Credentials, we can create the API key and restrict it for the particular service, which in our case is the Android Google Maps SDK. When we click on "+ CREATE CREDENTIALS", we will get the options as shown in the screenshot below:

From the options, we need to click on "API key", which will lead us to the following dialog:

The dialog states that the API key is created. It means we have the API key now, but the services to be provided by the API key are not yet enabled. Hence, we will get the API key without any service, as shown in the screenshot below:

Since our API key is not restricted to any services, we need to apply some restrictions to it. But before that, we need to ensure that the "Google Maps Android API v2" service is enabled. Then, we can restrict the API key to Maps SDK for Android by navigating inside API key 1 as shown in the screenshot below:

After selecting the service that we require, we need to click on 'SAVE' to save the configurations to the API key. Then, we will get the API key with Google Maps SDK for Android enabled as shown in the screenshot below:

Now, our API key is enabled and ready for use.
Adding the API Key to our Flutter App

In order to add the Android Maps API key to the Flutter project, we first need to go to the AndroidManifest.xml file. In our AndroidManifest.xml file, we need to add the following code as a child of the application element:

    <manifest ...
      <application ...
        <meta-data android:name="com.google.android.geo.API_KEY"
                   android:value="..."/>

Here, we need to replace the value in android:value with the API key that we generated in the above steps. Make sure you use your own API key!

Installing Google Maps Flutter Plugin

Now, we are going to use the package called google_maps_flutter. This plugin provides the GoogleMap widget to be used in our Flutter project to show the map on the screen. The widget houses many properties that allow us to tamper with the displayed map and customize it accordingly. Now, to install this plugin, we need to add the plugin in our pubspec.yaml file as shown in the code snippet below:

    dependencies:
      google_maps_flutter: ^1.2.0

Adding Maps To the Home Screen

Now, we are going to add the GoogleMap widget to our Home screen to display the map on the screen. For this, we need to initialize the GoogleMapController that handles the loading state of the map. Then, we need to define the CameraPosition that determines which location the map has to show. In the CameraPosition class, we can assign the target value, which is latitude and longitude. We can also specify the zoom value, which will determine how much the camera is to be zoomed into the map.
Then, we need to use the GoogleMap widget with all the required properties configured as shown in the code snippet below:

    class _HomePageState extends State<HomePage> {
      Completer<GoogleMapController> _controller = Completer();

      static final CameraPosition _initialCameraPosition = CameraPosition(
        target: LatLng(37.42796133580664, -122.085749655962),
        zoom: 15,
      );

      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text("Maps"),
            centerTitle: true,
          ),
          body: GoogleMap(
            mapType: MapType.normal,
            initialCameraPosition: _initialCameraPosition,
            onMapCreated: (GoogleMapController controller) {
              _controller.complete(controller);
            },
          ),
        );
      }
    }

Here, using the mapType option on the GoogleMap widget, we can choose what type of map to show. It can be a satellite image, a roadmap, or a normal map. The onMapCreated event will be fired after the map has been placed on the screen. In this event, we can choose to add the map markers, which we are going to do later. The resultant map is shown in the screenshot below:

Adding Map Markers

Now we are going to add a marker to our map. The markers help us show a specific location on the map. In order to set the markers, we need to initialize a state that holds the markers. Then, we are going to create a function that will be triggered in the onMapCreated event. After the map is created, we will add the markers to the screen. To add markers, we have access to the Marker widget, which will take a markerId and a position value (latitude and longitude).
The markers initialization is provided in the code snippet below:

    Set<Marker> _markers = {};

    void _onMapCreated(GoogleMapController controller) {
      setState(() {
        _markers.add(
          Marker(
            markerId: MarkerId("id-1"),
            position: LatLng(37.42796133580664, -122.085749655962),
          ),
        );
      });
    }

Now, we need to set the markers property in the GoogleMap widget and also assign the _onMapCreated function to the onMapCreated event as shown in the code snippet below:

    body: GoogleMap(
      mapType: MapType.normal,
      initialCameraPosition: _initialCameraPosition,
      onMapCreated: _onMapCreated,
      markers: _markers,
    ),

Hence, we will get the marker on the map as shown in the screenshot below:

Adding Info To the Map Markers

We can also customize our markers based on our own needs. We can add information to them as well. The information will be shown once we click on the marker on the map. In order to add the marker info, we need to add the infoWindow property to the Marker widget. Then, we need to use the InfoWindow widget to add extra information to the marker as shown in the code snippet below:

    void _onMapCreated(GoogleMapController controller) {
      setState(() {
        _markers.add(
          Marker(
            markerId: MarkerId("id-1"),
            position: LatLng(37.42796133580664, -122.085749655962),
            infoWindow: InfoWindow(
              title: "GooglePlex",
            ),
          ),
        );
      });
    }

Now once we click on the marker, we will see the information as shown in the screenshot below:

Not only that, we can add custom icons and other elements to the marker, which is pretty easy as well. Finally, we have successfully integrated Google Maps into our Flutter map project using the Google Maps SDK API key from the Google Developer Console and the google_maps_flutter package.

Conclusion

The main objective of this Flutter tutorial is to explore the integration of Google Maps in the Flutter environment. Google Maps is a powerful geo-navigation service provided by Google which can be integrated into almost any technology nowadays.
In this case, we made use of a Google API key restricted to the Android Google Maps SDK to integrate Google Maps into our Flutter project. Then, in order to show the actual map in the UI of the mobile app, we made use of the latest package available, google_maps_flutter. The configuration was easy and simple. The map was set up with the help of a simple GoogleMap widget and some basic configuration. In the end, we were also able to add some markers to the map. A lot of other things are possible with this widget; exploring them is a challenge left to you.

If you found this tutorial on how to add maps to Flutter apps useful, please consider spreading the word by sharing the link further. Cheers!

Published at DZone with permission of Krissanawat Kaewsanmuang. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/how-to-add-google-maps-in-flutter
Most modern languages and frameworks use a to-do list as their sample app. It is a great way to understand the basics of a framework, such as user interaction, basic navigation, or how to structure code. We'll start in a more pragmatic way: by building a shopping list app.

You will be able to develop this app in React Native code, build it for both iOS and Android, and finally install it on your phone. This way, you can not only show your friends what you built, but also explore missing features that you could build yourself, think about user-interface improvements, and above all, motivate yourself to keep learning React Native as you feel its true potential.

By the end of this chapter, you will have built a fully-functional shopping list that you can use on your phone and will have all the tools you need to create and maintain simple stateful apps.

One of the most powerful features of React Native is its cross-platform capabilities; we will build our shopping list app for both iOS and Android, reusing 99% of our code. Let's take a look at how the app will look on both platforms:

iOS: After adding more products, this is how it will look:

Android: After adding more products, this is how it will look:

The app will have a very similar user interface on both platforms, but we won't need to care much about the differences (for example, the back button on the Add a product screen), as they will be handled automatically by React Native.

It is important to understand that each platform has its own user interface patterns, and it's a good practice to follow them. For example, navigation is usually handled through tabs on iOS, while Android prefers a drawer menu, so we should build both navigation patterns if we want happy users on both platforms. In any case, this is only a recommendation, and any user interface pattern could be built on either platform.
In later chapters, we will see how to handle two different patterns in the most effective way within the same codebase.

The app comprises two screens: your shopping list and a list of the products which could be added to your shopping list. The user can navigate from the Shopping List screen to the Add a product screen through the round blue button, and back through the < Back button. We will also build a clear button on the Shopping List screen (the round red button) and the ability to add and remove products on the Add a product screen.

We will be covering the following topics in this chapter:

- Folder structure for a basic React Native project
- React Native's basic CLI commands
- Basic navigation
- JS debugging
- Live reloading
- Styling with NativeBase
- Lists
- Basic state management
- Handling events
- AsyncStorage
- Prompt popups
- Distributing the app

React Native has a very powerful CLI that we will need to install to get started with our project. To install it, just run the following command in your command line (you might need to run this with sudo if you don't have enough permissions):

```
npm install -g react-native-cli
```

Once the installation is finished, we can start using the React Native CLI by typing react-native. To start our project, we will run the following command:

```
react-native init --version="0.49.3" GroceriesList
```

This command will create a basic project named GroceriesList with all the dependencies and libraries you need to build the app on iOS and Android. Once the CLI has finished installing all the packages, you should have a folder structure similar to this:

The entry file for our project is index.js.
If you want to see your initial app running on a simulator, you can use React Native's CLI again:

```
react-native run-ios
```

Or:

```
react-native run-android
```

Provided you have XCode or Android Studio and an Android emulator installed, you should be able to see a sample screen on your simulator after compilation:

We have everything we need set up to start implementing our app, but in order to easily debug and see our changes in the simulator, we need to enable two more features: remote JS debugging and live reloading.

For debugging, we will use React Native Debugger, a standalone app based on the official debugger for React Native, which includes React Inspector and Redux DevTools. It can be downloaded following the instructions on its GitHub repository (). For this debugger to work properly, we will need to enable Remote JS Debugging from within our app by opening the React Native development menu within the simulator, by pressing command + ctrl + Z on iOS or command + M on Android. If everything goes well, we should see the following menu appear:

Now, we will press two buttons: Debug Remote JS and Enable Live Reload. Once we are done with this, we have our development environment up and ready to start writing React code.
The self-created file index.js will have the following code: /*** index.js ***/ import { AppRegistry } from 'react-native'; import App from './src/main'; AppRegistry.registerComponent('GroceriesList', () => App); In short, these files will import the common root code for our app, store it in a variable named App and later pass this variable to the AppRegistry through the registerComponent method. AppRegistry is the component to which we should register our root components. Once we do this, React Native will generate a JS bundle for our app and then run the app when it's ready by invoking AppRegistry.runApplication. Most of the code we will be writing, will be placed inside the src folder. For this app, we will create our root component ( main.js) in this folder, and a screens subfolder, in which we will store our two screens ( ShoppingList and AddProduct). Now let's install all the initial dependencies for our app before continue coding. In our project's root folder, we will need to run the following command: npm install Running that command will install all the basic dependencies for every React Native project. Let's now install the three packages we will be using for this specific app: npm install native-base --save npm install react-native-prompt-android --save npm install react-navigation --save Further ahead in this chapter, we will explain what each package will be used for. Most mobile apps comprise of more than one screen, so we will need to be able to "travel" between those screens. In order to achieve this, we will need a Navigation component. React Native comes with a Navigator and a NavigatorIOS component out of the box, although the React maintainers recommend using an external navigation solution built by the community named react-navigation (), which is very performant, well maintained, and rich in features, so we will use it for our app. 
Because we already installed our module for navigation (react-navigation), we can set up and initialize our Navigator component inside our main.js file:

```javascript
/*** src/main.js ***/

import React from 'react';
import { StackNavigator } from 'react-navigation';

import ShoppingList from './screens/ShoppingList.js';
import AddProduct from './screens/AddProduct.js';

const Navigator = StackNavigator({
  ShoppingList: { screen: ShoppingList },
  AddProduct: { screen: AddProduct }
});

export default class App extends React.Component {
  constructor() {
    super();
  }

  render() {
    return <Navigator />;
  }
}
```

Our root component imports both of the screens in our app (ShoppingList and AddProduct) and passes them to the StackNavigator function, which generates the Navigator component. Let's take a deeper look into how StackNavigator works.

StackNavigator provides a way for any app to transition between screens, where each new screen is placed on top of a stack. When we request navigation to a new screen, StackNavigator will slide the new screen in from the right and place a < Back button in the upper-left corner to go back to the previous screen on iOS or, on Android, will fade the new screen in from the bottom and display a <- arrow to go back. With the same codebase, we will trigger the familiar navigation patterns on both iOS and Android.

StackNavigator is also really simple to use, as we only need to pass the screens in our app as a hash map, where the keys are the names we want for our screens and the values are the imported screens as React components. The result is a <Navigator /> component which we can render to initialize our app.
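Conceptually, StackNavigator behaves like a plain stack of screens. The following framework-free sketch (class and method names are ours, not react-navigation's API) illustrates the push/pop behaviour described above:

```javascript
// Minimal, framework-free model of stack-based navigation.
// This only illustrates the concept; react-navigation's real API differs.
class NavigationStack {
  constructor(initialScreen) {
    this.stack = [initialScreen];
  }
  navigate(screen) {
    this.stack.push(screen); // a new screen slides in on top of the stack
  }
  goBack() {
    // the < Back button removes the top screen, but never the root screen
    if (this.stack.length > 1) this.stack.pop();
  }
  current() {
    return this.stack[this.stack.length - 1];
  }
}

const nav = new NavigationStack('ShoppingList');
nav.navigate('AddProduct');
console.log(nav.current()); // 'AddProduct'
nav.goBack();
console.log(nav.current()); // 'ShoppingList'
```

This is exactly why returning from Add a product lands back on the shopping list: the previous screen is still on the stack underneath.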
There are several popular UI libraries, NativeBase and React Native elements being the two most popular and best supported. Out of these two, we will choose NativeBase, since it's documentation is slightly clearer for beginners. You can find the detailed documentation on how NativeBase works on their website (), but we will go through the basics of installing and using some of their components in this chapter. We previously installed native-base as a dependency of our project through npm install but NativeBase includes some peer dependencies, which need to be linked and included in our iOS and Android native folders. Luckily, React Native already has a tool for finding out those dependencies and linking them; we just need to run: react-native link At this point, we have all the UI components from NativeBase fully available in our app. So, we can start building our first screen. Our first screen will contain a list of the items we need to buy, so it will contain one list item per item we need to buy, including a button to mark that item as already bought. Moreover, we need a button to navigate to the AddProduct screen, which will allow us to add products to our list. Finally, we will add a button to clear the list of products, in case we want to start a new shopping list: Let's start by creating ShoppingList.js inside the screens folder and importing all the UI components we will need from native-base and react-native (we will use an alert popup to warn the user before clearing all items). The main UI components we will be using are Fab (the blue and red round buttons), List, ListItem, CheckBox, Text, and Icon. To support our layout, we will be using Body, Container, Content, and Right, which are layout containers for the rest of our components. 
Having all these components, we can create a simple version of our ShoppingList component:

```javascript
/*** ShoppingList.js ***/

import React from 'react';
import { Alert } from 'react-native';
import {
  Body,
  Container,
  Content,
  Right,
  Text,
  CheckBox,
  List,
  ListItem,
  Fab,
  Icon
} from 'native-base';

export default class ShoppingList extends React.Component {
  static navigationOptions = {
    title: 'My Groceries List'
  };

  /*** Render ***/
  render() {
    return (
      <Container>
        <Content>
          <List>
            <ListItem>
              <Body>
                <Text>'Name of the product'</Text>
              </Body>
              <Right>
                <CheckBox checked={false} />
              </Right>
            </ListItem>
          </List>
        </Content>
        <Fab style={{ backgroundColor: '#5067FF' }}>
          <Icon name="add" />
        </Fab>
        <Fab style={{ backgroundColor: 'red' }}>
          <Icon ios="ios-remove" android="md-remove" />
        </Fab>
      </Container>
    );
  }
}
```

This is just a dumb component statically displaying the components we will be using on this screen. Some things to note:

- navigationOptions is a static attribute which will be used by <Navigator> to configure how the navigation should behave. In our case, we want to display My Groceries List as the title for this screen.
- For native-base to do its magic, we need to use <Container> and <Content> to properly form the layout.
- Fab buttons are placed outside <Content>, so they can float over the left and right bottom corners.
- Each ListItem contains a <Body> (main text) and a <Right> (icons aligned to the right).

Since we enabled Live Reload in our first steps, we should see the app reloading after saving our newly created file. All the UI elements are now in place, but they are not functional since we didn't add any state. This should be our next step.
Let's add some initial state to our ShoppingList screen to populate the list with actual dynamic data. We will start by creating a constructor and setting the initial state there:

```javascript
/*** ShoppingList.js ***/

...

constructor(props) {
  super(props);
  this.state = {
    products: [{ id: 1, name: 'bread' }, { id: 2, name: 'eggs' }]
  };
}

...
```

Now, we can render that state inside of <List> (inside the render method):

```javascript
/*** ShoppingList.js ***/

...

<List>
  {this.state.products.map(p => {
    return (
      <ListItem key={p.id}>
        <Body>
          <Text style={{ color: p.gotten ? '#bbb' : '#000' }}>
            {p.name}
          </Text>
        </Body>
        <Right>
          <CheckBox checked={p.gotten} />
        </Right>
      </ListItem>
    );
  })}
</List>

...
```

We now rely on a list of products inside our component's state, each product storing an id, a name, and a gotten property. When modifying this state, we will automatically re-render the list.

Now, it's time to add some event handlers, so we can modify the state at the user's command or navigate to the AddProduct screen.

All the interaction with the user will happen through event handlers in React Native. Depending on the controller, we will have different events which can be triggered. The most common event is onPress, as it will be triggered every time we push a button, a checkbox, or a view in general. Let's add some onPress handlers for all the components which can be pushed in our screen:

```javascript
/*** ShoppingList.js ***/

...

render() {
  return (
    <Container>
      <Content>
        <List>
          {this.state.products.map(p => {
            return (
              <ListItem
                key={p.id}
                onPress={this._handleProductPress.bind(this, p)}
              >
                <Body>
                  <Text style={{ color: p.gotten ? '#bbb' : '#000' }}>
                    {p.name}
                  </Text>
                </Body>
                <Right>
                  <CheckBox
                    checked={p.gotten}
                    onPress={this._handleProductPress.bind(this, p)}
                  />
                </Right>
              </ListItem>
            );
          })}
        </List>
      </Content>
      <Fab
        style={{ backgroundColor: '#5067FF' }}
        onPress={this._handleAddProductPress.bind(this)}
      >
        <Icon name="add" />
      </Fab>
      <Fab
        style={{ backgroundColor: 'red' }}
        onPress={this._handleClearPress.bind(this)}
      >
        <Icon ios="ios-remove" android="md-remove" />
      </Fab>
    </Container>
  );
}

...
```
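The conditional text colour above depends on each product's gotten flag, which tapping a product will flip. A quick framework-free sketch of flipping that flag without mutating the previous array (the helper name is ours, not code from the book) looks like this:

```javascript
// Sketch (plain JS): flip the `gotten` flag of one product while
// returning a new array, so the previous state object stays untouched.
function toggleGotten(products, id) {
  return products.map(p => (p.id === id ? { ...p, gotten: !p.gotten } : p));
}

const before = [{ id: 1, name: 'bread' }, { id: 2, name: 'eggs', gotten: true }];
const after = toggleGotten(before, 1);

console.log(after[0].gotten);  // true
console.log(before[0].gotten); // undefined: the original array is untouched
```

Returning a fresh array instead of mutating state in place is the pattern React's setState expects, and it makes re-renders predictable.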
Notice we added three onPress event handlers:

- On <ListItem>, to react when the user taps on one product in the list
- On <CheckBox>, to react when the user taps on the checkbox icon next to every product in the list
- On both the <Fab> buttons

If you know React, you probably understand why we use .bind in all our handler functions but, in case you have doubts, .bind will make sure we can use this inside the definition of our handlers as a reference to the component itself instead of the global scope. This will allow us to call methods inside our components, such as this.setState, or read our component's attributes, such as this.props and this.state. For the cases when the user taps on a specific product, we also bind the product itself, so we can use it inside our event handlers.

Now, let's define the functions which will serve as event handlers:

```javascript
/*** ShoppingList.js ***/

...

_handleProductPress(product) {
  this.state.products.forEach(p => {
    if (product.id === p.id) {
      p.gotten = !p.gotten;
    }
  });
  this.setState({ products: this.state.products });
}

...
```

First, let's create a handler for when the user taps on a product from our shopping list or on its checkbox. We want to mark the product as gotten (or unmark it if it was already gotten), so we will update the state with the product marked properly.

Next, we will add a handler for the blue <Fab> button to navigate to the AddProduct screen:

```javascript
/*** ShoppingList.js ***/

...

_handleAddProductPress() {
  this.props.navigation.navigate('AddProduct', {
    addProduct: product => {
      this.setState({
        products: this.state.products.concat(product)
      });
    },
    deleteProduct: product => {
      this.setState({
        products: this.state.products.filter(p => p.id !== product.id)
      });
    },
    productsInList: this.state.products
  });
}

...
```

This handler uses this.props.navigation, which is a property automatically passed by the Navigator component from react-navigation.
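The effect of .bind is easy to reproduce outside React. In this plain-JS sketch (the class and names are ours, purely for illustration), an extracted method loses its this unless we bind it, and .bind can also pre-fill the first argument, just like .bind(this, p) does in the render method above:

```javascript
// Why we call .bind(this, p) on the handlers: a method extracted from its
// object loses its `this` unless we bind it explicitly.
class Screen {
  constructor() {
    this.taps = 0;
  }
  handlePress(product) {
    this.taps += 1; // throws if `this` is undefined
    return product;
  }
}

const screen = new Screen();
const unbound = screen.handlePress;                      // `this` is lost here
const bound = screen.handlePress.bind(screen, 'bread');  // `this` and arg fixed

console.log(bound());     // 'bread'
console.log(screen.taps); // 1
// unbound('bread') would throw: `this` is undefined inside class methods
```

This is why every handler passed as a prop in the render method is bound first: the component hands the bare function to React Native, which later calls it without any receiver.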
This property contains a method named navigate, which receives the name of the screen to which the app should navigate, plus an object which can be used as a global state. In the case of this app, we will store three keys:

- addProduct: A function to allow the AddProduct screen to modify the ShoppingList component's state to reflect the action of adding a new product to the shopping list.
- deleteProduct: A function to allow the AddProduct screen to modify the ShoppingList component's state to reflect the action of removing a product from the shopping list.
- productsInList: A variable holding the list of products already on the shopping list, so the AddProduct screen can know which products were already added to the shopping list and display those as "already added", preventing the addition of duplicate items.

Handling state within the navigation should be seen as a workaround for simple apps containing a limited number of screens. In larger apps (as we will see in later chapters), a state management library, such as Redux or MobX, should be used to keep the separation between pure data and user interface handling.

We will add the last handler, for the red <Fab> button, which enables the user to clear all the items in the shopping list in case they want to start a new list:

```javascript
/*** ShoppingList.js ***/

...

_handleClearPress() {
  Alert.alert('Clear all items?', null, [
    { text: 'Cancel' },
    { text: 'Ok', onPress: () => this.setState({ products: [] }) }
  ]);
}

...
```

We are using Alert to prompt the user for confirmation before clearing all the elements in our shopping list. Once the user confirms this action, we will empty the products attribute in our component's state.

Let's see how the whole component's structure looks when putting all the methods together:

```javascript
/*** ShoppingList.js ***/

import React from 'react';
import { Alert } from 'react-native';
import {
  ...
} from 'native-base';

export default class ShoppingList extends React.Component {
  static navigationOptions = {
    title: 'My Groceries List'
  };

  constructor(props) {
    ...
  }

  /*** User Actions Handlers ***/
  _handleProductPress(product) {
    ...
  }

  _handleAddProductPress() {
    ...
  }

  _handleClearPress() {
    ...
  }

  /*** Render ***/
  render() {
    ...
  }
}
```

The structure of a React Native component is very similar to that of a normal React component. We need to import React itself and then some components to build up our screen. We also have several event handlers (which we have prefixed with an underscore as a mere convention) and finally a render method to display our components using standard JSX. The only difference from a React web app is the fact that we are using React Native UI components instead of DOM components.

As the user will need to add new products to the shopping list, we need to build a screen in which we can prompt the user for the name of the product to be added and save it in the phone's storage for later use.

When building a React Native app, it's important to understand how mobile devices handle the memory used by each app. Our app will share memory with the rest of the apps on the device so, eventually, the memory our app is using will be claimed by a different app. Therefore, we cannot rely on keeping data in memory for later use. If we want to make sure the data is available across sessions of our app, we need to store that data in the device's persistent storage.

React Native offers an API to handle the communication with the persistent storage in our mobile devices, and this API is the same on iOS and Android, so we can write cross-platform code comfortably. The API is named AsyncStorage, and we can use it after importing it from React Native:

```javascript
import { AsyncStorage } from 'react-native';
```

We will only use two methods from AsyncStorage: getItem and setItem.
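Because setItem and getItem only deal in strings, every read or write goes through JSON.stringify and JSON.parse. Here is a rough sketch of that round trip using a synchronous Map as a stand-in for the device storage (the real AsyncStorage API is asynchronous and returns promises):

```javascript
// Sketch of the string-only storage contract, with a Map standing in for
// the device's persistent storage. The real AsyncStorage API is async.
const mockStorage = new Map();

function setItem(key, value) {
  mockStorage.set(key, String(value)); // only strings are ever persisted
}

function getItem(key) {
  return mockStorage.has(key) ? mockStorage.get(key) : null;
}

const products = [{ id: 1, name: 'bread' }];
setItem('@allProducts', JSON.stringify(products));     // serialize before saving
const restored = JSON.parse(getItem('@allProducts'));  // parse after loading

console.log(restored[0].name);    // 'bread'
console.log(getItem('@missing')); // null, like AsyncStorage for absent keys
```

Keeping the stringify/parse pair at the storage boundary means the rest of the component can keep working with plain objects and arrays.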
For example, we will create within our screen a local function to handle the addition of a product to the full list of products:

```javascript
/*** AddProduct.js ***/

...

async addNewProduct(name) {
  const newProductsList = this.state.allProducts.concat({
    name: name,
    id: Math.floor(Math.random() * 100000)
  });
  await AsyncStorage.setItem(
    '@allProducts',
    JSON.stringify(newProductsList)
  );
  this.setState({
    allProducts: newProductsList
  });
}

...
```

There are some interesting things to note here:

- We are using ES7 features such as async and await to handle asynchronous calls instead of promises or callbacks. Understanding ES7 is outside the scope of this book, but it is recommended to learn and understand the use of async and await, as it's a very powerful feature we will be using extensively throughout this book.
- Every time we add a product to allProducts, we also call AsyncStorage.setItem to permanently store the product in our device's storage. This action ensures that the products added by the user will be available even when the operating system clears the memory used by our app.
- We need to pass two parameters to setItem (and also to getItem): a key and a value. Both of them must be strings, so we need to use JSON.stringify if we want to store JSON-formatted data.

As we have just seen, we will be using an attribute in our component's state named allProducts, which will contain the full list of products the user can add to the shopping list. We can initialize this state inside the component's constructor to give the user a gist of what he/she will be seeing on this screen even during the first run of the app (this is a trick used by many modern apps to onboard users by faking a used state):

```javascript
/*** AddProduct.js ***/

...

constructor(props) {
  super(props);
  this.state = {
    allProducts: [
      { id: 1, name: 'bread' },
      { id: 2, name: 'eggs' },
      { id: 3, name: 'paper towels' },
      { id: 4, name: 'milk' }
    ],
    productsInList: []
  };
}

...
```
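The control flow of addNewProduct above can be modelled outside React Native. In this sketch, fakeSetItem (our name) is a promise-based stand-in for AsyncStorage.setItem, so the await pauses the function in the same way:

```javascript
// Rough model of the async/await flow in addNewProduct, using an in-memory
// object instead of the device storage. Names here are ours, not the API's.
const store = {};
const fakeSetItem = (key, value) =>
  new Promise(resolve => {
    store[key] = value; // persist, then resolve like AsyncStorage.setItem
    resolve();
  });

async function addNewProduct(allProducts, name) {
  const newProductsList = allProducts.concat({
    name: name,
    id: Math.floor(Math.random() * 100000) // naive random id, as in the chapter
  });
  // execution pauses here until the "storage write" completes
  await fakeSetItem('@allProducts', JSON.stringify(newProductsList));
  return newProductsList;
}

addNewProduct([{ id: 1, name: 'bread' }], 'milk').then(list => {
  console.log(list.length);  // 2
  console.log(list[1].name); // 'milk'
});
```

The key property of this shape is ordering: the new list is only returned (and, in the real component, only handed to setState) after the storage write has finished.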
Besides allProducts, we will also have a productsInList array, holding all the products which are already added to the current shopping list. This will allow us to mark each such product as Already in shopping list, preventing the user from trying to add the same product twice.

This constructor will be very useful for our app's first run but, once the user has added products (and therefore saved them in persistent storage), we want those products to be displayed instead of this test data. In order to achieve this functionality, we should read the saved products from AsyncStorage and set them as the initial allProducts value in our state. We will do this in componentWillMount:

```javascript
/*** AddProduct.js ***/

...

async componentWillMount() {
  const savedProducts = await AsyncStorage.getItem('@allProducts');
  if (savedProducts) {
    this.setState({
      allProducts: JSON.parse(savedProducts)
    });
  }
  this.setState({
    productsInList: this.props.navigation.state.params.productsInList
  });
}

...
```

We are updating the state once the screen is ready to be mounted. First, we update the allProducts value by reading it from persistent storage. Then, we update the productsInList list based on what the ShoppingList screen has set in the navigation property.

With this state, we can build our list of products which can be added to the shopping list:

```javascript
/*** AddProduct.js ***/

...

render() {
  return (
    <List>
      {this.state.allProducts.map(product => {
        const productIsInList = this.state.productsInList.find(
          p => p.id === product.id
        );
        return (
          <ListItem key={product.id}>
            <Body>
              <Text
                style={{ color: productIsInList ? '#bbb' : '#000' }}
              >
                {product.name}
              </Text>
              {productIsInList && (
                <Text note>
                  {'Already in shopping list'}
                </Text>
              )}
            </Body>
          </ListItem>
        );
      })}
    </List>
  );
}

...
```
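The load-or-seed logic in componentWillMount above boils down to "prefer the persisted data, otherwise fall back to the demo products". A synchronous sketch of that decision (function and constant names are ours) looks like this:

```javascript
// Sketch of the load-or-seed pattern: use what was saved to storage if it
// exists, otherwise fall back to the hard-coded demo products.
const DEFAULT_PRODUCTS = [
  { id: 1, name: 'bread' },
  { id: 2, name: 'eggs' }
];

function loadAllProducts(storage) {
  const saved = storage['@allProducts']; // stand-in for AsyncStorage.getItem
  return saved ? JSON.parse(saved) : DEFAULT_PRODUCTS;
}

// First run: nothing saved yet, so the seed data is used.
console.log(loadAllProducts({}).length); // 2

// Later runs: the persisted list wins over the demo data.
const disk = { '@allProducts': JSON.stringify([{ id: 7, name: 'milk' }]) };
console.log(loadAllProducts(disk)[0].name); // 'milk'
```

The same two branches exist in componentWillMount: the if (savedProducts) guard is what keeps the seed data in place on the very first run.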
Inside our render method, we use an Array.map function to iterate over and print each possible product, checking whether the product is already added to the current shopping list in order to display a note warning the user: Already in shopping list.

Of course, we still need to add a better layout, buttons, and event handlers for all the possible user actions. Let's start improving our render method to put all the functionality in place.

As happened with the ShoppingList screen, we want the user to be able to interact with our AddProduct component, so we will add some event handlers to respond to user actions. Our render method should then look something like this:

```javascript
/*** AddProduct.js ***/

...

render() {
  return (
    <Container>
      <Content>
        <List>
          {this.state.allProducts.map(product => {
            const productIsInList = this.state.productsInList.find(
              p => p.id === product.id
            );
            return (
              <ListItem
                key={product.id}
                onPress={this._handleProductPress.bind(this, product)}
              >
                <Body>
                  <Text
                    style={{ color: productIsInList ? '#bbb' : '#000' }}
                  >
                    {product.name}
                  </Text>
                  {productIsInList && (
                    <Text note>
                      {'Already in shopping list'}
                    </Text>
                  )}
                </Body>
                <Right>
                  <Icon
                    ios="ios-remove-circle"
                    android="md-remove-circle"
                    style={{ color: 'red' }}
                    onPress={this._handleRemovePress.bind(this, product)}
                  />
                </Right>
              </ListItem>
            );
          })}
        </List>
      </Content>
      <Fab
        style={{ backgroundColor: '#5067FF' }}
        onPress={this._handleAddProductPress.bind(this)}
      >
        <Icon name="add" />
      </Fab>
    </Container>
  );
}

...
```

There are three event handlers responding to the three press events in this component:

- On the blue <Fab> button, which is in charge of adding new products to the products list
- On each <ListItem>, which will add the product to the shopping list
- On the delete icons inside each <ListItem>, to remove the product from the list of products which can be added to the shopping list

Let's start by adding new products to the available products list once the user presses the <Fab> button:
```javascript
/*** AddProduct.js ***/

...

_handleAddProductPress() {
  prompt(
    'Enter product name',
    '',
    [
      { text: 'Cancel', style: 'cancel' },
      { text: 'OK', onPress: this.addNewProduct.bind(this) }
    ],
    { type: 'plain-text' }
  );
}

...
```

We are using here the prompt function from the react-native-prompt-android module. Despite its name, it's a cross-platform prompt-in-a-popup library, which we will use to add products through the addNewProduct function we created before. We need to import the prompt function before we use it, as follows:

```javascript
import prompt from 'react-native-prompt-android';
```

And here is the output:

Once the user enters the name of the product and presses OK, the product will be added to the list, so we can move on to the next event handler: adding products to the shopping list when the user taps on the product name:

```javascript
/*** AddProduct.js ***/

...

_handleProductPress(product) {
  const productIndex = this.state.productsInList.findIndex(
    p => p.id === product.id
  );
  if (productIndex > -1) {
    this.setState({
      productsInList: this.state.productsInList.filter(
        p => p.id !== product.id
      )
    });
    this.props.navigation.state.params.deleteProduct(product);
  } else {
    this.setState({
      productsInList: this.state.productsInList.concat(product)
    });
    this.props.navigation.state.params.addProduct(product);
  }
}

...
```

This handler checks whether the selected product is already on the shopping list. If it is, it removes it by calling deleteProduct from the navigation state, and also from the component's state by calling setState. Otherwise, it adds the product to the shopping list by calling addProduct from the navigation state and refreshes the local state by calling setState.

Finally, we will add an event handler for the delete icon on each <ListItem>, so the user can remove products from the list of available products:
```javascript
/*** AddProduct.js ***/

...

async _handleRemovePress(product) {
  this.setState({
    allProducts: this.state.allProducts.filter(p => p.id !== product.id)
  });
  await AsyncStorage.setItem(
    '@allProducts',
    JSON.stringify(
      this.state.allProducts.filter(p => p.id !== product.id)
    )
  );
}

...
```

We need to remove the product from the component's local state, but also from AsyncStorage, so it doesn't show up during later runs of our app.

We now have all the pieces to build our AddProduct screen, so let's take a look at the general structure of this component:

```javascript
/*** AddProduct.js ***/

import React from 'react';
import prompt from 'react-native-prompt-android';
import { AsyncStorage } from 'react-native';
import {
  ...
} from 'native-base';

export default class AddProduct extends React.Component {
  static navigationOptions = {
    title: 'Add a product'
  };

  constructor(props) {
    ...
  }

  async componentWillMount() {
    ...
  }

  async addNewProduct(name) {
    ...
  }

  /*** User Actions Handlers ***/
  _handleProductPress(product) {
    ...
  }

  _handleAddProductPress() {
    ...
  }

  async _handleRemovePress(product) {
    ...
  }

  /*** Render ***/
  render() {
    ...
  }
}
```

We have a very similar structure to the one we built for ShoppingList: the navigationOptions attribute, a constructor building the initial state, user action handlers, and a render method. In this case, we added a couple of async methods as a convenient way to deal with AsyncStorage.

Running our app on a simulator/emulator is a very reliable way to feel how our app will behave on a mobile device. We can simulate touch gestures, poor network connectivity environments, or even memory problems when working in simulators/emulators. But eventually, we would like to deploy the app to a physical device, so we can perform more in-depth testing.

There are several options to install or distribute an app built in React Native, a direct cable connection being the easiest one.
Facebook keeps an updated guide on how to achieve direct installation on React Native's site (), but there are other alternatives when the time comes to distribute the app to other developers, testers, or designated users.

Testflight () is an awesome tool for distributing the app to beta testers and developers, but it comes with a big drawback: it only works for iOS. It's really simple to set up and use, as it is integrated into iTunes Connect, and Apple considers it the official tool for distributing apps within the development team. On top of that, it's absolutely free, and its usage limits are quite large:

- Up to 25 testers in your team
- Up to 30 devices per tester in your team
- Up to 2,000 external testers outside your team (with grouping capabilities)

In short, Testflight is the platform to choose when you target your apps only at iOS devices.

Since, in this book, we want to focus on cross-platform development, we will introduce other alternatives to distribute our apps to iOS and Android devices from the same platform.

Diawi () is a website where developers can upload their .ipa and .apk files (the compiled app) and share the links with anybody, so the app can be downloaded and installed on any iOS or Android device connected to the internet. The process is simple:

- Build the .ipa (iOS) / .apk (Android) in XCode/Android Studio.
- Drag and drop the generated .ipa / .apk file into Diawi's site.
- Share the link created by Diawi with the list of testers by email or any other method.

Links are private and can be password protected for those apps with a higher need for security. The main downside is the management of the testing devices, as once the links are distributed, Diawi loses control over them, so the developer cannot know which versions were downloaded and tested. If managing the list of testers manually is an option, Diawi is a good alternative to Testflight.
If we need to manage which versions were distributed to which testers, and whether they have already started testing the app, we should give Installr () a try. Functionality-wise it is quite similar to Diawi, but it also includes a dashboard to track who the users are, which apps were sent to them individually, and the status of the app on each testing device (not installed, installed, or opened). This dashboard is quite powerful and definitely a big plus when one of our requirements is to have good visibility over our testers, devices, and builds. The downside of Installr is that its free plan covers only three testing devices per build, although they offer a cheap one-time payment scheme in case we really want that number increased. It's a very reasonable option when we are in need of visibility and cross-platform distribution.

During the course of this chapter, we learned how to start up a React Native project, building an app which includes basic navigation and handling several user interactions. We saw how to handle persistent data and basic states using the navigation module, so we could transition through the screens in our project. All these patterns can be used to build lots of simple apps, but in the next chapter, we will dive deeper into more complex navigation patterns and how to communicate with and process external data fetched from the internet, which will enable us to structure and prepare our app for growth. On top of that, we will use MobX, a JavaScript library, for state management, which will make our domain data available to all the screens inside our app in a very simple and effective way.
https://www.packtpub.com/product/react-native-blueprints/9781787288096
Purpose: This demo uses the basal ganglia model to cycle through a 5-element sequence, where an arbitrary start can be presented to the model.

Comments: This basal ganglia is now hooked up to a memory and includes routing. The addition of routing allows the system to choose between two different actions: whether to go through the sequence, or be driven by the visual input. If the visual input has its value set to 0.8*START+D (for instance), it will begin cycling through at D->E, etc. The 0.8 scaling helps ensure START is unlikely to accidentally match other SPAs (which can be a problem in low-dimensional examples like this one).

The 'utility' graph shows the utility of each rule going into the basal ganglia. The 'rule' graph shows which one has been selected and is driving the thalamus.

Usage: When you run the network, it will go through the sequence forever, starting at D. You can right-click the SPA graph and set the value to anything else (e.g. '0.8*START+B') and it will start at the new letter and then begin to sequence through. The point is partly that it can ignore the input after it has first been shown and doesn't perseverate on the letter as it would without gating.

Output: See the screen capture below.

from spa import *

D = 16

class Rules:
    # Define the rules by specifying the start state and the desired next state
    def start(vision='START'):
        set(state=vision)
    def A(state='A'):        # e.g. if in state A
        set(state='B')       # then go to state B
    def B(state='B'):
        set(state='C')
    def C(state='C'):
        set(state='D')
    def D(state='D'):
        set(state='E')
    def E(state='E'):
        set(state='A')

class Routing(SPA):
    # Define an SPA model (cortex, basal ganglia, thalamus)
    dimensions = 16
    state = Buffer()             # Create a working memory (recurrent network)
                                 # object: i.e. a Buffer
    vision = Buffer(feedback=0)  # Create a cortical network object with no
                                 # recurrence (so no memory properties, just
                                 # transient states)
    BG = BasalGanglia(Rules)     # Create a basal ganglia with the
                                 # prespecified set of rules
    thal = Thalamus(BG)          # Create a thalamus for that basal ganglia
                                 # (so it uses the same rules)
    input = Input(0.1, vision='0.8*START+D')  # Define an input; set the input
                                              # to state 0.8*START+D for 100 ms

model = Routing()
http://ctnsrv.uwaterloo.ca/docs/html/demos/spa_sequencerouted.html
Model Relationships and "list_display"

Yesterday I had one of my coworkers ask me what I thought to be a simple Django question: "in the admin pages im trying to show fields from different tables but it wont let me." I clarified the problem with this chap, and eventually suggested using something like this:

from django.contrib import admin
from project.app.models import AwesomeModel

class AwesomeModelAdmin(admin.ModelAdmin):
    list_display = ('fk_field__fk_attr1', 'fk_field2__fk_attr')

admin.site.register(AwesomeModel, AwesomeModelAdmin)

The Problem

As it just so happens, that does not work with Django as of SVN revision 9907 (or any previous versions, I presume). You cannot span relationships in your Django models from the list_display item in your model admin classes. This completely caught me off guard, so I looked through the docs a bit. I found some work-arounds for the issue, but they all seemed pretty ridiculous and, more importantly, violations of the DRY principle. It also surprised me that I hadn't noticed this problem before! I guess that's why it's not fixed yet--it's not really required by all that many people?

Anyway, I did a bit of research into the issue. I stumbled upon a ticket which appears to be aimed at resolving this problem. The ticket is pretty old, and it looks like it's still up in the air as to whether or not the patches will be applied to trunk. That wasn't very encouraging.

A Solution

Being the nerd that I am, I set out to find an "efficient" solution of my own for the problem, without meddling with the Django codebase itself. Below you will find my attempt at some pure Python hackery (no Django involved other than overriding a method) to make our lives easier until someone with some pull in the Django community gets something better into Django's trunk.

Disclaimer: this might well be the absolute worst way to approach the problem. I'm okay with that, because I still like the results and I learned a lot while producing them.
I don't have any benchmarks or anything like that, but I wouldn't complain if someone else came up with some and shared them in the comments.

from django.contrib import admin

def mygetattr(obj, hier):
    """
    Recursively attempts to find attributes across Django relationships.
    """
    if len(hier):
        return mygetattr(getattr(obj, hier[0]), hier[1:])
    return obj

def dynamic_attributes(self, attr, *args, **kwargs):
    """
    Retrieves object attributes.  If an attribute contains '__' in the name,
    and the attribute doesn't exist, this method will attempt to span Django
    model relationships to find the desired attribute.
    """
    try:
        # try to get the attribute the normal way
        return super(admin.ModelAdmin, self).__getattribute__(attr, *args, **kwargs)
    except AttributeError:
        # the attribute doesn't exist for the object.  See if the attribute
        # has two underscores in it (but doesn't begin with them).
        if attr and not attr.startswith('__') and '__' in attr:
            # it does!  make a callable for the attribute
            new_attr = lambda o: mygetattr(o, attr.split('__'))

            # add the new callable to the object's attributes
            setattr(self, attr, new_attr)

            # return the callable
            return new_attr

# override the __getattribute__ method on the admin.ModelAdmin class
admin.ModelAdmin.__getattribute__ = dynamic_attributes

This code could be placed, for example, in your project's root urls.py file. That would make it so that all of the apps in your project could benefit from the relationship spanning. Alternatively, you could place it in the admin.py module for a specific application. It would just need to be someplace that was actually processed when your site is "booted up."

Basically this code will override the built-in __getattribute__ method for the django.contrib.admin.ModelAdmin class. When an attribute such as fk_field__fk_attr1 is requested for an object, the code will check to see if an attribute already exists with that name. If so, the existing attribute will be used.
If not, it chops up the attribute based on the __ (double underscores) that it can find. Next, the code does some recursive getattr() calls until it runs out of relationships to hop across so it can find the attribute you really want. Once all of that is done, the end result is placed in a callable attribute for the respective admin.ModelAdmin subclass so it won't have to be built again in the future. The new callable attribute is what is returned by the __getattribute__ function.

Caveats

Now, there are some funky things that you must be aware of before you go and implement this code on your site. Very important things, I might add. Really, I've only found one large caveat, but I wouldn't be surprised if there are others.

The biggest issue is that you have to define the list_display attribute of your ModelAdmin class after you register the model and the admin class with the admin site (see below for an example). Why? Because when the Django admin validates the model's admin class, it checks the items in list_display. If it can't find a callable attribute called fk_field__fk_attr1 during validation, it will complain and the model won't be registered in the Django admin site. The dynamic attributes that are built by my hackery are added to the object after it is validated (more accurately, when the list page is rendered, from what I have observed).

This is by far the most disgusting side effect of my hackery (at least that I have observed so far). I don't like it, but I do like it a lot more than defining loads of callables in several ModelAdmin classes just to have something simple show up in the admin's list pages. You're free to form your own opinions.

Using the original code I offered to my coworker, this is how it would have to look in order for my hack to work properly.
from django.contrib import admin
from project.app.models import AwesomeModel

class AwesomeModelAdmin(admin.ModelAdmin):
    pass

admin.site.register(AwesomeModel, AwesomeModelAdmin)
AwesomeModelAdmin.list_display = ('fk_field__fk_attr1', 'fk_field2__fk_attr')

See, I told you it was funky. But again, I'll take it. If any of you have thoughts for how this could be improved, please share. Constructive criticism is very welcome and encouraged. Please also consider reviewing the ticket that is to address this problem.
http://www.codekoala.com/blog/2009/model-relationships-and-list_display/
open(2) open(2)

open - open for reading or writing

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int open (const char *path, int oflag, ... /* mode_t mode */);

path points to a path name naming a file. open opens a file descriptor for the named file and sets the file status flags according to the value of oflag. oflag values are constructed by OR-ing flags from the following list (only one of the first three flags below may be used):

O_RDONLY Open for reading only.

O_WRONLY Open for writing only.

O_RDWR Open for reading and writing.

O_NDELAY or O_NONBLOCK These flags may affect subsequent reads and writes [see read(2) and write(2)]. If both O_NDELAY and O_NONBLOCK are set, O_NONBLOCK will take precedence.

When opening a FIFO with O_RDONLY or O_WRONLY set:

If O_NDELAY or O_NONBLOCK is set: An open for reading-only will return without delay; an open for writing-only will return an error if no process currently has the file open for reading.

If O_NDELAY and O_NONBLOCK are clear: An open for reading-only will block until a process opens the file for writing; an open for writing-only will block until a process opens the file for reading.

When opening a file associated with a communication line:

If O_NDELAY or O_NONBLOCK is set: The open will return without waiting for the device to be ready or available; subsequent behavior of the device is device specific.

If O_NDELAY and O_NONBLOCK are clear: The open will block until the device is ready or available.

O_APPEND If set, the file pointer will be set to the end of the file prior to each write.

O_SYNC When opening a regular file, this flag affects subsequent writes. If set, each write(2) will wait for both the file data and file status to be physically updated.

O_DSYNC When opening a regular file, this flag affects subsequent writes. If set, each write(2) will wait for the file data to be physically updated.
O_RSYNC When opening a regular file, this flag affects subsequent reads. If set, and either O_SYNC or O_DSYNC is set, then each read(2) will wait for the file to be in the same state as O_SYNC or O_DSYNC would require for writes.

O_NOCTTY If set and the file is a terminal, the terminal will not be allocated as the calling process's controlling terminal.

O_CREAT If the file exists, this flag has no effect, except as noted under O_EXCL below. Otherwise, the file is created; the owner ID of the file is set to the effective user ID of the process, and the group ID of the file is set to the effective group ID of the process or to the group ID of the directory in which the file is being created. This is determined as follows: if the underlying filesystem was mounted with the BSD file creation semantics flag [see fstab(4)] or the S_ISGID bit is set [see chmod(2)] on the parent directory, then the group ID of the new file is set to the group ID of the parent directory; otherwise it is set to the effective group ID of the calling process. If the group ID of the new file does not match the effective group ID or one of the supplementary group IDs, the S_ISGID bit is cleared. The access permission bits of the file mode are set to the value of mode, modified as follows [see creat(2)]: all bits set in the file mode creation mask of the process are cleared [see umask(2)], and the ``save text image after execution'' bit of the mode is cleared [see chmod(2)].

O_TRUNC If the file exists and is a regular file, and the file is successfully opened O_RDWR or O_WRONLY, its length is truncated to 0 and the mode and owner are unchanged. Its effect on other file types is implementation-dependent. The result of using O_TRUNC with O_RDONLY is undefined.

O_EXCL If O_EXCL and O_CREAT are set, open will fail if the file exists. The check for the existence of the file and the creation of the file if it does not exist is atomic with respect to other processes executing open naming the same filename in the same directory with O_EXCL and O_CREAT set.
O_LCFLUSH If set, when all copies of a file descriptor are closed, all modified buffers associated with the file will be written back to the physical medium. O_LCFLUSH is a Silicon Graphics extension.

O_LCINVAL If set, when all copies of a file descriptor are closed, all modified buffers associated with the file will be written back to the physical medium, then invalidated, immediately freeing the buffers for other use. The process doing the last close must be able to acquire write access at the time of last close for the cache to be invalidated. O_LCINVAL is a Silicon Graphics extension.

O_DIRECT If set, all reads and writes on the resulting file descriptor will be performed directly to or from the user program buffer, provided appropriate size and alignment restrictions are met. Refer to the F_SETFL and F_DIOINFO commands in the fcntl(2) manual entry for information about how to determine the alignment constraints. O_DIRECT is a Silicon Graphics extension and is only supported on local EFS and XFS file systems, and remote BDS file systems.

When opening a STREAMS file, oflag may be constructed from O_NDELAY or O_NONBLOCK OR-ed with either O_RDONLY, O_WRONLY, or O_RDWR. Other flag values are not applicable to STREAMS devices and have no effect on them. The values of O_NDELAY and O_NONBLOCK affect the operation of STREAMS drivers and certain system calls [see read(2), getmsg(2), putmsg(2), and write(2)]. For drivers, the implementation of O_NDELAY and O_NONBLOCK is device specific. Each STREAMS device driver may treat these options differently.

When open is invoked to open a named stream, and the connld module [see connld(7)] has been pushed on the pipe, open blocks until the server process has issued an I_RECVFD ioctl [see streamio(7)] to receive the file descriptor.

If path is a symbolic link and O_CREAT and O_EXCL are set, the link is not followed.

The file pointer used to mark the current position within the file is set to the beginning of the file.
The new file descriptor is the lowest numbered file descriptor available and is set to remain open across exec system calls [see fcntl(2)].

Certain flag values can be set following open as described in fcntl(2). There is a system enforced limit on the number of open file descriptors per process {OPEN_MAX}, whose value is returned by the getdtablesize(2) function.

The named file is opened unless one or more of the following are true:

EACCES The file does not exist and write permission is denied by the parent directory of the file to be created.

EACCES O_CREAT or O_TRUNC is specified and write permission is denied.

EACCES A component of the path prefix denies search permission.

EACCES The file is a character or block device file and the file system in which it resides has been mounted with the nodev option [see fstab(4)].

EACCES oflag permission is denied for an existing file.

EAGAIN The file exists, O_CREAT or O_TRUNC are specified, mandatory file/record locking is set, and there are outstanding record locks on the file [see chmod(2)].

EBUSY path points to a device special file and the device is in the closing state.

EDQUOT O_CREAT is specified, the file does not exist, and the directory in which the entry for the new file is being placed cannot be extended either because the user's quota of disk blocks on the file system containing the directory has been exhausted or the user's quota of inodes on the file system on which the file is being created has been exhausted.

EEXIST O_CREAT and O_EXCL are set, and the named file exists.

EFAULT path points outside the allocated address space of the process.

EINTR A signal was caught during the open system call.

EINVAL An attempt was made to open a file not in an EFS, XFS or BDS file system with O_DIRECT set.

EIO A hangup or error occurred during the open of the STREAMS-based device.

EISDIR The named file is a directory and oflag is write or read/write.
ELOOP Too many symbolic links were encountered in translating path.

EMFILE The process has too many open files [see getrlimit(2)].

ENAMETOOLONG The length of the path argument exceeds {PATH_MAX}, or the length of a path component exceeds {NAME_MAX} while {_POSIX_NO_TRUNC} is in effect.

ENFILE The system file table is full.

ENODEV path points to a device special file and the device is not in the activated state.

ENOENT O_CREAT is not set and the named file does not exist.

ENOENT O_CREAT is set and a component of the path prefix does not exist or is the null pathname.

ENOMEM The system is unable to allocate a send descriptor.

ENOSPC O_CREAT and O_EXCL are set, and the file system is out of inodes or the directory in which the entry for the new file is being placed cannot be extended because there is no space left on the file system containing the directory.

ENOSPC O_CREAT is set and the directory that would contain the file cannot be extended.

ENOSR Unable to allocate a stream.

ENOTDIR A component of the path prefix is not a directory.

ENXIO The named file is a character special or block special file, and the device associated with this special file does not exist.

ENXIO O_NDELAY or O_NONBLOCK is set, the named file is a FIFO, O_WRONLY is set, and no process has the file open for reading.

ENXIO A STREAMS module or driver open routine failed.

EOPNOTSUPP An attempt was made to open a socket (not currently supported).

EPERM path points to a device special file, the device is in the setup state, and the calling process does not have the P_DEV privilege.

ETIMEDOUT The object of the open is located on a remote system which is not available [see intro(2)].

EROFS The named file resides on a read-only file system and either O_WRONLY, O_RDWR, O_CREAT, or O_TRUNC is set in oflag (if the file does not exist).
chmod(2), close(2), creat(2), dup(2), exec(2), fcntl(2), getdtablesize(2), getmsg(2), getrlimit(2), intro(2), lseek(2), putmsg(2), read(2), stat(2), stat(5), umask(2), write(2)

Upon successful completion, the file descriptor is returned. Otherwise, a value of -1 is returned and errno is set to indicate the error.
https://nixdoc.net/man-pages/IRIX/man2/open.2.html
>>/US/UN - More US sanctions on Iran likely but not on oil, gas Released on 2012-10-12 10:00 GMT dynamic in the politic/diplomatic arena. As Kamran says, they are trying to give a signal to Iran, you cooperate with us or you are against us, our conditions or we'll strike, but both scenarios are negative for global econ. We have to remember too that there's an upcoming election in the US and it's important for Obama to consolidate his position toward voters. I am curious to know if the "sources" in the IAEA report weren't the same of the Iraqi case...at the end, I believe it's going to be a one player decision. On 11/9/11 8:29 AM, Matt Mawhinney wrote: I agree this is puzzling. Perhaps the US overestimated European willingness to go along with sanctioning the CBI and now they are back peddling. It's not as if they just realized that going after Iran's ability to sell its crude oil would have serious economic consequences. So you have to wonder how serious the Obama Administration was about sanctioning the CBI from the beginning. It may be the rhetorical equivalent of Israel saying it is going to bomb Iran. For now the US decided it's going to tighten existing sanctions and perhaps add a few new ones. The revelation of the Saudi plot combined with the comprehensive picture painted in the new IAEA report may be a good leverage piece with Japan, Taiwan, and the EU in convinving them to import less oil and invest less in Iran's economy. This won't change Iran's behavior, but it will strain the economy even more. On 11/9/11 7:05 AM, Kamran Bokhari wrote: The idea is that Iran should fear that since sanctions aren't working that raises the probability of a military strike. That said by that same logic the Iranians can conclude that if tougher sanctions are too risky for the global econ then a military move would be worse. So, it is puzzling. 
Sent via BlackBerry by AT&T ---------------------------------------------------------------------- From: Emre Dogru <emre.dogru@stratfor.com> Sender: analysts-bounces@stratfor.com Date: Wed, 9 Nov 2011 06:43:09 -0600 (CST) To: Analyst List<analysts@stratfor.com> ReplyTo: Analyst List <analysts@stratfor.com> Subject: Fwd: [OS] IRAN/US/UN - More US sanctions on Iran likely but not on oil, gas Why is the whole IAEA report noise, then? We know that the US decided not to impose sanctions on Iran's CB due to Europe's opposition and the fear of rising oil prices (the newspaper reports about the quarrel with Iran itself increased brent by $2 yesterday). This report suggests that there won't be sanctions on oil and gas sector either. Why try to put pressure on Iran (which I think Iran does not care) if the Western powers are not able to follow up with additional sanctions? ---------------------------------------------------------------------- From: "Yaroslav Primachenko" <yaroslav.primachenko@stratfor.com> To: "The OS List" <os@stratfor.com> Sent: Tuesday, November 8, 2011 11:28:23 PM Subject: [OS] IRAN/US/UN - More US sanctions on Iran likely but not on oil, gas More US sanctions on Iran likely but not on oil, gas 11/8/11 The United States may impose more sanctions on Iran, possibly on commercial banks or front companies, but is unlikely to go after its oil and gas sector or its central bank for now, a US official said on Tuesday. "I think you will see bilateral sanctions increasing," the official, speaking on condition of anonymity, told Reuters after the International Atomic Energy Agency, the UN nuclear watchdog, said Iran has worked on developing an atomic bomb design and may still be conducting relevant research. -- Yaroslav Primachenko Global Monitor STRATFOR -- -- Emre Dogru STRATFOR Cell: +90.532.465.7514 Fixed: +1.512.279.9468 emre.dogru@stratfor.com -- Matt Mawhinney ADP STRATFOR 221 W. 
6th Street, Suite 400 Austin, TX 78701 T: 512.744.4300 | M: 267.972.2609 | F: 512.744.4334 -- Carlos Lopez Portillo M. ADP STRATFOR M: +1 512 814 9821
http://www.wikileaks.org/gifiles/docs/17/175826_re-fwd-os-iran-us-un-more-us-sanctions-on-iran-likely-but.html
I am having a problem with the below applet. It compiles just fine, and when it is run it shows up with the multiplication problem at the bottom of the applet as expected. The problem is that it says whether the answer is correct or not before an answer is input. After I input an answer it doesn't seem to check that either. I know it has something to do with the boolean statements but I can't seem to adjust it to work properly. Any help would be great.

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

public class multiplicationGame extends JApplet implements ActionListener {
    int num1, num2;
    int random;
    int answer;
    JTextField guess = new JTextField(4); // Holds the answer the player will enter
    JLabel message = new JLabel("Type your answer and press enter: "); // JLabels to hold messages to output on the screen.
    JLabel message2 = new JLabel(" ");
    boolean correct = false;
    boolean question = true;

    public void init() {
        Container con = getContentPane();
        setLayout(new FlowLayout());
        add(message);
        add(guess);
        add(message2);
        guess.addActionListener(this);
    }

    public void paint(Graphics g) {
        if(question == true) {
            super.paint(g);
            newQuestion();
            boolean question = false;
        }
        if( correct == true) {
            message2.setText("Very good!");
            newQuestion();
        } else {
            message2.setText("No. Please try again.");
        }
    }

    public void actionPerformed(ActionEvent event) {
        int answer = Integer.parseInt(guess.getText());
        if(answer == random) {
            boolean correct = true;
        } else {
            boolean correct = false;
        }
        repaint();
    }

    public void newQuestion() {
        int num1 = 1 + (int)(Math.random() * 9);
        int num2 = 1 + (int)(Math.random() * 9);
        int random = num1 * num2;
        showStatus("How much is " + num1 + " times " + num2 + ".");
    }
}
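The core issue in the applet above is variable shadowing: statements like `boolean correct = true;` inside actionPerformed declare a brand-new local variable instead of assigning to the field, so the field never changes (the same bug bites newQuestion, where `int random = num1 * num2;` leaves the field random at 0). A minimal standalone sketch of the effect, not the applet itself:

```java
public class ShadowDemo {
    // Field, playing the role of the applet's 'correct' member.
    static boolean correct = false;

    static void wrongUpdate() {
        boolean correct = true;  // declares a NEW local; the field is untouched
    }

    static void rightUpdate() {
        correct = true;          // assigns to the field itself
    }

    public static void main(String[] args) {
        wrongUpdate();
        System.out.println("after wrongUpdate: " + correct);  // still false
        rightUpdate();
        System.out.println("after rightUpdate: " + correct);  // now true
    }
}
```

Dropping the type keyword (writing `correct = true;` instead of `boolean correct = true;`) makes the assignment hit the field, which is the fix the applet needs in actionPerformed and newQuestion.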
https://www.daniweb.com/programming/software-development/threads/348868/multiplication-applet
Alex, In this function you are returning an address, however inside the function x is multiplied by 2. You can't return both, so are they multiplied and given a new address before being returned, or what?

You can't return the address of a local variable because that variable gets destroyed at the end of the function. In cases like this, return by value is really more appropriate. If you want to return something by address, you'll have to make sure the address you're returning isn't destroyed. You can do that in several ways: by returning the address of a static local variable (this has other downsides), or passing in another parameter by address that you can return (if the caller passes in a variable, it should still be valid when the function returns back to the caller). There may be other ways I'm not thinking of right now. But really this example just serves to show a simple point. Use the most appropriate return type for the situation, which is not return by address here. 🙂

Hi Alex, I learned more about this dogged question I had above: correct me if I am wrong, but the & in front of getElement tells the return value not to copy but to pass the function return value by reference. I also created this in main: Do I need to delete this or will it automatically do so when main goes out of scope?

You are correct about the & in the example above. As for the dynamic allocation, you need to delete it yourself. It will not be deleted when array goes out of scope.

Alex, Nice lesson yet again. I noticed that a function returning a reference can be used both as an lvalue as well as an rvalue. I just tried the same with a function returning an address (dereferencing the returned pointer), and it works fine: Is this proper? I want to know if my compiler is 'helping me out' to do something out of the standard.
BTW I found out some typos as usual: 😀

> When returning an (a) built-in array or pointer (use return by address)
> When we call getElement(array, 10), getElement() returns a reference to the element of the array inside array (<- what's this?) that has the index 10.

Yes, this is fine. Wording fixed. 🙂 Thanks for pointing those out.

Hi Alex, I took the initiative and wrote full programs (small) for each quiz above, but quiz 5 is in question below. I would recommend that you might encourage other students to do the same. It may "cement" what's learned better, applying it in a situational context and ridding some of the examples' isolation. It seems to be, at least for me, good practice and a lot of fun. Below is my function code for the last quiz. I don't fully understand the usage of & in front of the function name in your example; if I put an ampersand in front of array_element it seems to function correctly - I can access array_element back in main, and array_element is changed at the source address, no copies. What am I misunderstanding here? Secondly, can you please explain what the ampersand & does at the processor/memory level (the abridged version)?

In this context, we're using & to mean "reference to". When we have the parameter "int array_element", that means the argument is going to be passed by value -- that is, whatever value we passed in will be copied into array_element. If we were to change array_element, the argument we passed in would not be changed, because array_element is a copy, and changing a copy doesn't change the original.

However, if instead we have the parameter "int &array_element", that means the argument is going to be passed by reference -- that is, whatever value we passed in, array_element will reference that argument. Any changes we make to array_element will be made to the argument as well. This is more efficient because no copies are made.
Finally, when we use an ampersand as part of the return type, like this: That means this function is returning a reference to a const integer. Without the ampersand, this function would return a copy of the value at array[index] back to the caller. With the ampersand, we return a const reference to the actual array element. This allows the caller to access the array element (to print it, for example) without having to make a copy.

I see my mistakes. First, I thought arrays were passed implicitly by reference. Second, I thought you could put the & in front of n_array: int * get_element (int * & n_array, int n_input_index, int &array_element) to reference it. Finally, index - it's passed in a return function without being referenced: is index copied?

Yes, the index is passed by value and is copied. It's generally fine to copy fundamental data types.

Hi. The scope of a variable defines where it is accessible, not where it is created and destroyed. That is defined by the duration of the variable. So in this code: ----------------------- ------------------------- variable x goes out of scope (we cannot access x directly after the closing brace), but it is not destroyed (because it has static duration) and we can access it "indirectly" (through the reference).

BTW: Great tutorials.

Good catch. I've fixed the wording. Thanks!

Thank you!

Dear Alex, besides greetings for your excellent explanations in this tutorial, can you explain, please, where the value of "value" returned by the function is stored in the case of returning by value (this works when the function is called (examp. 1)), and where the value of "&value" is stored in the case of returning by address (this doesn't work when the function is called (examp. 2))? In both cases "value" is a local variable. Thanks a lot.

[ examp.
1: this one works

int doubleValue(int x)
{
    int value = x * 2;
    return value; // A copy of value will be returned here
} // value goes out of scope here

examp. 2: this one doesn't work

int* doubleValue(int x)
{
    int value = x * 2;
    return &value; // return value by address here
} // value destroyed here
]

Both of these functions work similarly. The difference is that in the top example, the value of variable value is copied to the caller. Because this is a copy, it doesn't matter what happens to local variable value after that point. In the bottom example, the address of local variable value is returned to the caller. Because local variable value then goes out of scope and is destroyed, this address becomes invalid.

Hi Alex, Greetings for your excellent blog. I have a question and I need help. I have source code with three files. There is a file called MEDICAO.c. In this file there is a function called LER_VRMS(); within this function, another function called LER_REG_24_BITS() is called that is located inside another file (DRIVER_ADE.c). After being executed, the return value should be assigned to the array [ ]. But that is not happening; I get a completely different value. Note: the array is declared as global in file MEDIÇÃO.c. Regards, Leonardo

Hi Leonardo. I can't even begin to guess what the problem might be from the information you've given me. My advice is to use the debugger to step through your program and watch it execute. Put watches on the variables you're interested in and see where their values are getting changed.
In quiz 4 and 5, isn't it safer to declare int *array as const?

Correct on all counts. I've updated the article. Thanks!

Hi Alex, congrats for the great tutorial. By far the best on C++ I've ever come across! One question: in the following code, which you used when explaining 'return by address', after using the delete operator, shouldn't we set 'array' to be a null pointer? (array = 0) Otherwise wouldn't the pointer 'array' end up as a dangling pointer? Thanks, Mario

We could, but since the program ends immediately after, it doesn't really matter. Dangling pointers won't do any harm when the program ends.

Good point! Thank you.

I still have a small missing part that is related to assignments. On regular assignments a = b, b is copied and assigned to a. When we declare a as a reference (SomeStruct &a = b), b is not copied. Now, what happens when instead of b we have a function? Thanks, Eli

Good question.

> SomeStruct& a = ByReferenceFunction(); // no copy is made

a references the return value, no copy is made.

> SomeStruct& b = ByValueFunction(); // a copy is made before function returns, and b now points to it

This won't compile. You can't set a reference to an rvalue because the return value has no memory address. However, you could do this: In this case, the lifetime of the copied return value is extended to the lifetime of the const reference.

> SomeStruct c = ByReferenceFunction(); // UNCLEAR: no copy is made by the function, but is the returned value copied to c?

This returns a reference to some value, but because c is not a reference, a copy is made and placed in c. So this is essentially the same thing as not returning a reference.

> SomeStruct d = ByValueFunction(); // UNCLEAR: a copy is made before function returns, but is it copied again before assigning it to d?

No, the return value is copied into d.

In this tutorial you seem to start doing something that you earlier (6.7) said we shouldn't do: But in this lesson: …..so, the * next to int.
And several other places in this tutorial. So I'm getting confused: is there some special reason for this, or…?

Yes, there is a slight difference. In the case where we're declaring a variable, putting the asterisk next to the variable name helps us avoid this mistake: (IMO, C++ made a syntactical mistake here; they really should both be pointers to int, but that's not the case.) However, with a function return value where there's only one return value: makes it clear that the function is returning a value of type int*. If you do this instead: it's easy to misinterpret this as a pointer to a function that returns a plain int. For that reason, I find attaching the * to the type clearer for return values. Make sense? I've updated lesson 6.7 to differentiate these two cases.

I have a question about the following program: int& Value(FixedArray25 &rArray, int nIndex) { return rArray.anValue[nIndex]; } Can we consider a reference as a dereferenced pointer? If we can, then when we call the function above, Value can be considered a pointer to rArray.anValue[nIndex], but when the function is finished, rArray.anValue[nIndex] will go out of scope, and Value will be pointing to garbage. Is there something wrong? Thank you for your good tutorial!!!

Yes, one way to think about references is to consider them as implicitly-dereferenced pointers. Another way is to consider them as aliases to the underlying data. I think your function should work fine. When the function terminates, the rArray reference goes out of scope, but the underlying data (the value you passed in for rArray) does not. Consequently, the reference returned by Value is still valid after the function terminates.

Hi Alex. Many thanks to you. You are crafting programmers out of many. PS: The latter example in the text works for this same reason; the struct is declared in global scope and therefore keeps on existing after the function call.
I found this sentence (from MSVC++ documentation) to be very clarifying: "A reference holds the address of an object, but behaves syntactically like an object." Therefore, the object which is passed back to the caller must keep on existing (along with its address) when the function goes out of scope. This is why you cannot pass back a reference to a local variable, which ceases to exist. If you change the original sample code so that nValue is declared in global scope, it will work because the nValue object stays alive.

int nValue; // declared in global scope.
int& DoubleValue(int nX) { //int nValue = nX * 2; nValue = nX * 2; return nValue; }

BTW, Sunny's program does return the same error message here (Qt & MinGW) and the math is wrong: "warning: reference to local variable 'nValue' returned [enabled by default]"

Hi Alex, as mentioned in your article, with 'return by reference' a local variable cannot be returned because it goes out of scope when the function returns. But I did not receive any compile error and the program is running fine. I am using Visual Studio 2010 (VC++).

#include <iostream>
using namespace std;
int main() { int &DoubleValue(int x); int number; number = DoubleValue(10); cout << "Number = " << number << endl; cin.clear(); cin.ignore(255, '\n'); cin.get(); return 0; }
int& DoubleValue(int nX) { int nValue = nX * 2; return nValue; // return a reference to nValue here }

What is the difference between int &DoubleValue(int x); and int& DoubleValue(int nX)? I am stuck on int& versus int &!!!! Please help me understand this.

They are the same thing. It's possible that since you're printing the value of number immediately, the value of nValue in the function hasn't actually been destroyed/overwritten yet. Trying to access a reference to a variable that has been destroyed will lead to unpredictable results, and should be avoided.

Hello! In the following code, is the dynamic array deleted properly? Thanks 🙂

Yes.

I tried to return the value of a local variable by using return by reference. It is possible to do that.
I am getting the correct values. The compiler is only giving a warning… warning C4172: returning address of local variable or temporary. Please see the code below and kindly give your comments.

Returning a reference to a local variable is something you shouldn't do, as you'd be returning a reference to a value that is being destroyed when the function ends.

How does a variable passed by reference change the original variable's data while being modified at once? (I know this sounds confusing but please try to understand 🙂

A reference works like an alias for the original data. You're not changing the value of the reference itself, because references are in a sense immutable. You're changing the value of the data the reference aliases. It's like modifying the dereferenced value of a pointer.

Hey Alex and also everyone on this site. This is an extremely great site. I've "favorite'd", haha, many C++ sites around the web, but goodness, this is some good stuff here. Even though this section just whizzes right over my head, I haven't seen a site that explains C++ this well. Anyway, 7-7.4 thus far is smoking my brain… is there another angle to look at this section from, haha? Seriously though. Five aka Mersinarii 🙂

Thaaaaank u Mr. Alex .. useful tutorial

Can you use the static keyword with a local variable in order to return it by reference or address? Thanks for these tutorials by the way.

Yes. I've updated the tutorial to include an example that does just that.

Functions can be lvalues? Did you just use a function as an lvalue in the return by reference example?

I didn't use the function itself as an lvalue, I used the reference that is returned by the function as an lvalue.

I tried to define a dynamic array in the main() function then initialize it in another function (called after the definition): It compiles fine, but when I run it, it tells me that "anTest was not initialized". So how can I define or initialize a dynamic array in a function then send it back to my main function?
I can’t really use the Return by address because I have more than one dynamic array to define in the same function. I need to do that because the size of my arrays will be read from a file (i.e “ReadInput()”) but the arrays will be used throughout my program and therefore should belong to the main() function. Oh and except the size, all the information in those 2 arrays will also be read from the file, so I don’t know how I could define them afterwards. Great question. The problem here is that when you pass anTest to ReadInput(), the address that anTest is holding is COPIED into pn. This means that when you allocate a dynamic array into pn, you’re essentially overwriting the copied address, and the value of anTest never gets changed. What you need to do is pass the pointer by reference, so that when you assign your dynamic array to pn, you’ll actually be assigning the value to anTest, and not a copy of anTest. You can do that by changing your declaration of ReadInput to the following: Is there a way to pass something back to the main function without returning it. For example, I want to create a function that reads data from the user: arguments are number of data to be entered and type of those data (int, double, char…). So I thought I would create a dynamic array of the required size and type (by using “switch”). But how can I pass that array back to the main function? Actually the way I want to define my function there’s no “return”, it would have to be a “void”. And we can’t overload the returning type so this doesn’t help me much. Maybe a better way to do it would be to pass the dynamic array as an argument with the appropriate type so I could use function overloading. It’s kind of avoiding the issue without solving it though. Plus how can you find the length of a dynamic array? (something equivalent to “strlen” for example). Thanks, Ben If you need to pass back an object from a function with a void return type, pass it as a reference parameter. 
There's no way to find the length of a dynamic array that I know of. This is always one of the things that's bugged me about C++ (doubly so because the C++ runtime has to know the length of the array in order to delete it).

Sorry, I'm not clear on your answer. Could you refer me to the appropriate chapter or show me an example? I understand "Return by Reference" but what is "Pass back as a Reference parameter"?

I'm referring to the concepts in the lesson on reference parameters. For example: Reference parameters give you the ability to change the value of the variable passed in, so you can use them to essentially return things from your function without using a return value. For example, the above function foo() returns 5 in nX.

Excuse the late reply - I found this from a Google search. Isn't it more likely the C++ runtime that deletes arrays? So the compiler cannot tell how big an allocated array will be (because it can be allocated by a variable amount). So if you want an array you can tell the size of, you should use an STL vector.

I am glad to see a number of Indian names here, such as Abhishek and Prabhakar, using the Internet to their advantage. I have been quietly reading through your tutorials on C++ and I must admit it is one of the best!!

There's also me, but I name myself afds. 😀 Great tutorials by the way.

According to Google Analytics, 11% of the visitors of this site are Indian.

And one Brazilian here, also! Alex, thanks for this excellent material. I've already said thanks before, but now once again. In my opinion, the best free material on C++ on the internet. Just amazing. Greetings from Rio.

Is that the reason this site is being translated to Hindi? Anyway, more heartfelt gratitude from India 🙂 🙂

One of our gracious readers wanted to give back to the site by doing some translation work. However, he left after getting distracted by other projects, so the Hindi translations are both stalled and getting stale. Consequently, I am considering removing them.
🙁

And one Vietnamese here! In Vietnam, especially among students and workers in the IT field, your site is very famous! The best ever! Thanks a lot, Alex. I never touched the C++ book for my college subject; I only used it for exams, and I see questions and solve them myself thanks to this website and the lot of information available here, and I really appreciate it. Thanks a lot!!! Thanks.

DEAR ALEX, your extremely fast response to my query, by adding section 7.4a on 26th Feb itself - I am overjoyed that my query deserved attention. Thanks. I will pursue C++ more vigorously. Please extend your best wishes for me to have patience. PRABHAKAR
http://www.learncpp.com/cpp-tutorial/74a-returning-values-by-value-reference-and-address/comment-page-1/
I'm trying to get a jump start with React and like the simplicity of the create-react-app tool created by Facebook and described here: Can anybody tell me what is wrong here? I'm trying to combine it with FeathersJS and add this dependency to package.json:

"feathers": "^2.0.0"

import feathers from 'feathers';

Compiled with warnings.

Warning in ./src/App.js /Users/nikolaschou/Dev/newbanking/payment-window/src/App.js 4:8 warning 'feathers' is defined but never used no-unused-vars ✖ 1 problem (0 errors, 1 warning)

Warning in ./~/express/lib/view.js Critical dependencies: 78:29-56 the request of a dependency is an expression @ ./~/express/lib/view.js 78:29-56

You may use special comments to disable some warnings. Use // eslint-disable-next-line to ignore the next line. Use /* eslint-disable */ to ignore all warnings in a file.

Judging by its documentation, feathers itself is the server and runs on Node. On the other hand, a React app is a client-side application and runs in the browser. You can't import feathers into a browser app because it is a library intended only for the server.

Note: Technically React apps can run on the server too, but Create React App doesn't currently support server rendering. It also comes with many pitfalls, so I recommend holding off using it until you are comfortable with React itself.

Normally, with Create React App, you are expected to run your API server (which may use Feathers) separately as a Node (or any other) app. The React client would access it via AJAX or other network APIs. Your Node app would use feathers for the server, and your React app would use feathers/client to communicate with it. To read about setting up a Node and a client-side React app to talk to each other, check out this tutorial and its demo.
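To make the client/server separation concrete, here is a hedged sketch of how the React client might describe a request to its separately running API server. Everything in it is illustrative: the /authentication path and the field names are made up, and a real Feathers app would more likely use the feathers/client package instead of hand-rolled requests.

```javascript
// Hypothetical helper for the React client. It only *builds* a request
// description; the actual network call is left to fetch/axios, so the
// browser app never imports any server-side code.
function buildLoginRequest(baseUrl, credentials) {
  return {
    url: baseUrl + '/authentication', // made-up endpoint for illustration
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(credentials),
    },
  };
}

// Example usage inside a component (not executed here):
//   const { url, options } = buildLoginRequest('http://localhost:3030',
//                                              { email: 'a@b.c', password: 'x' });
//   fetch(url, options).then(res => res.json());

module.exports = { buildLoginRequest };
```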
https://codedump.io/share/6uBSh1UVv2uX/1/create-react-app-yields-39the-request-of-a-dependency-is-an-expression39-in-expresslibviewjs
Helper class used to get/create the virtual registers that will be used to replace the MachineOperand when applying a mapping. More...

#include "llvm/CodeGen/GlobalISel/RegisterBankInfo.h"

Helper class used to get/create the virtual registers that will be used to replace the MachineOperand when applying a mapping.

Definition at line 1009 of file RegisterBankInfo.h.

Create an OperandsMapper that will hold the information to apply InstrMapping to MI.

Create as many new virtual registers as needed for the mapping of the OpIdx-th operand. The number of registers is determined by the number of breakdowns for the related operand in the instruction mapping. The type of the new registers is a plain scalar of the right size. The proper type is expected to be set when the mapping is applied to the instruction(s) that realizes the mapping. The OpIdx-th operand has been assigned a new virtual register.

The final mapping of the instruction.

Definition at line 1061 of file RegisterBankInfo.h.

Referenced by llvm::applyMapping().

Definition at line 1058 of file RegisterBankInfo.h.

Referenced by substituteSimpleCopyRegs().

The MachineRegisterInfo we used to realize the mapping.

Definition at line 1064 of file RegisterBankInfo.h.

Referenced by substituteSimpleCopyRegs().

Get all the virtual registers required to map the OpIdx-th operand of the instruction. This returns an empty range when createVRegs or setVRegs has not been called. The iterator may be invalidated by a call to setVRegs or createVRegs. When ForDebug is true, we will not check that the list of new virtual registers does not contain uninitialized values.

Referenced by substituteSimpleCopyRegs().

Print this operands mapper on the OS stream.

Set the virtual register of the PartialMapIdx-th partial mapping of the OpIdx-th operand to NewVReg. The PartialMapIdx-th register of the value mapping of the OpIdx-th operand has been set.
https://llvm.org/doxygen/classOperandsMapper.html
This patch depends on and. It introduces the clang-refactor tool alongside the local-rename action, which uses the existing renaming engine used by clang-rename. The tool doesn't actually perform the source transformations yet, it just provides testing support. I've moved one test from clang-rename over right now, but will move the others if the general direction of this patch is accepted.

The following options are supported by clang-refactor:

The testing support provided by clang-refactor is described below: When -selection=test:<file> is given, clang-refactor will parse the selection commands from that file. The selection commands are grouped and the specified refactoring action invoked by the tool. Each command in a group is expected to produce an identical result. The precise syntax for the selection commands is described in a comment for findTestSelectionRangesIn in TestSupport.h.

Thanks for taking a look!

Rebase on top of trunk and.

Ping.

One of my main concerns is still that I don't see the need for all the template magic yet :) Why doesn't everybody use the RefactoringResult we define here?

Does this just test the selection?

Can we use composition instead of inheritance here?

In D36574#858763, @klimek wrote:
> One of my main concerns is still that I don't see the need for all the template magic yet :) Why doesn't everybody use the RefactoringResult we define here?

This refactoring result is only really useful for the test part of clang-refactor so you can compare two results. The non-test part of clang-refactor and other clients don't really need an abstract interface for results, as they will have different code for different results anyway.

No, this is the moved clang-rename/Field.cpp test that tests local-rename. I will move the other tests when this patch is accepted.

Get rid of inheritance of RefactoringEngine by getting rid of the class itself.

@klimek, are you talking about the template usage in this patch or the whole approach in general?
I can probably think of some way to reduce the template boilerplate if you are talking about the general approach. :)

Ok, then I find this test really hard to read - where does it check what the symbol was replaced with?

Please add documentation for the class. I appreciate your RFC that explains all these concepts; it would be nice if you could move some key concepts into the code documentation, since we couldn't expect future readers to refer to the RFC. Thanks! ;)

when?

Can we pass RefactoringRuleContext instead of ASTRefactoringContext, so that rules can get the AST context from the rule context? I'm a bit nervous about having different contexts. Maybe we should have just one interface T evaluateSelection(RefactoringRuleContext, SelectionConstraint) const. I think the rule context can be useful for all rules.

nit: It's a common practice to wrap implementations in namespaces instead of using using namespace. It becomes hard to manage scopes/namespaces when there are using namespaces.

I think you would get a conflict of positional args when you have multiple sub-command instances. Any reason not to use clang::tooling::CommonOptionsParser, which takes care of sources and compilation options for you?

Do we need a FIXME here?

It seems that this simply finds the first command that is available? What if there is more than one command that satisfies the requirements?

Same here. There could be a potential conflict among subcommands. Also please provide a more detailed description of this option; it is not obvious to users what a selection is.

In D36574#860203, @klimek wrote: :)

I'll work on a patch for a simpler solution as well then. It'll probably be dependent on this one though. I'm pretty sure I need some template magic to figure out how to connect inputs like selection and options and entities that produce source replacements, but I could probably get rid of most of the RefactoringActionRule template magic.

Oops, this isn't supposed to be in this patch, I will remove.
I thought I should check for occurrences, but you're right, it's probably better to always check the source replacements (we can introduce options that apply non-default occurrences in the future, which would allow us to check replacements for occurrences in comments). Would something like //CHECK: "newName" [[@LINE]]:24 -> [[]]:27 work when checking for replacements?

Not from my experience; I've tried multiple actions and it seems like the right arguments are parsed for the right subcommand. It looks like the cl option parser is smart enough to handle identical options across multiple subcommands.

I don't quite follow, when would multiple subcommands be satisfied? The subcommand corresponds to the action that the user wants to do, there should be no ambiguity.

Yep, I think that'd be better. Or we could just apply the replacement and check on the replaced code? (I think that's what we do in fixits for clang and clang-tidy)

I agree that using CommonOptionsParser would be preferable, but right now it doesn't work well with subcommands. I will create a followup patch that improves subcommand support in CommonOptionsParser and uses it in clang-refactor when this patch is accepted.

Can we get rid of this overload and simply make RefactoringRuleContext mandatory for all refactoring functions?

Do we still need this overload?

Should we also provide an interface to test if this has an ASTContext? Or maybe simply return null if there is no ASTContext? WDYT?

Should this be private?

Thanks! Could you add a FIXME for the CommonOptionsParser change?

(I added this comment a while ago, and it seemed to have shifted. I intended to comment on line 297, auto It = llvm::find_if(. Sorry about that.) I guess I don't quite understand how a specific subcommand is picked based on the command-line options users provide. The llvm::find_if above seems to simply find the first action/subcommand that is registered? Could you add a comment about how subcommands are matched?

Removed.

Done.
Ah, I see. I added a comment in the updated patch that tries to explain the process. Basically we know that one action maps to just one subcommand because the actions must have unique command names. Since we've already created the subcommands, LLVM's command-line parser enables one particular subcommand that was specified in the command-line arguments. This is what this search does, we just look for the enabled subcommand that was turned on by LLVM's command-line parser, without taking any other command-line arguments into account. Shouldn't these be public? Ohh! I didn't notice RefactoringActionSubcommand inherits cl::SubCommand. Thanks for the explanation! Why return a mutable reference? Also, this is not used in this patch; maybe leave it out? Is the test selection temporary before we have true selection? I am a bit concerned about having test logic in the production code. It makes the code paths a bit complicated, and we might easily end up testing the test logic instead of the actual code logic. I'm fine with this if this is just a short term solution before we have the actual selection support. I think the ...In suffix is redundant. We could either get rid of it or use ...InFile. No, both are meant to co-exist. I guess we could introduce a new option (-selection vs -test-selection and hide -test-selection)? Another approach that I was thinking about is to have a special hidden subcommand for each action (e.g. test-local-rename). Alternatively we could use two different tools (clang-refactor vs clang-refactor-test)? Both will have very similar code though. Note that it's not the first time test logic will exist in the final tool. For example, llvm-cov has a convert-for-testing subcommand that exists in the production tool. I'm not saying that it's necessarily the best option though, but I don't think it's the worst one either ;) Address review comments I guess I am more concerned about mixing production code with testing code than having special options. 
For example, we have different invocation paths based on ParsedTestSelection. IMO, whether selected ranges come from a test selection or an actual selection should be transparent to clang-refactor. Maybe refactoring the selection handling logic out as a separate interface would help?

Refactor the selection argument handling by using a SourceSelectionArgument interface with a subclass for test selection, and separate out the test code from the tool class.

Makes sense, I just did a cleanup refactoring that should make it better.

Sorry for not having noticed this earlier, but I think clang::refactor sounds like a better namespace than clang::clang_refactor.

I would expect the SourceSelectionArgument to be more like a container of ranges and further decoupled from the refactoring logic. Same for the TestSelectionRangesInFile. How about an interface like this? template <typename T> void ForAllRanges(T callback) const;

Do we want to consider other result types?

Not sure I fully understand your interface suggestion, but I've tried to decouple some more in the updated patch.

I think this is ready to go. @klimek Manuel, are all your concerns addressed?

Sorry, I should've been more specific about what I had in mind, but what you have right now is close. I think this can be further decoupled a bit by having forAllRanges take in a SourceManager instead of a RefactoringRuleContext, since source selection should only care about source ranges.

IIUC, createCustomConsumer is used to inject testing behavior into clang-refactor. I think this is fine, but please document the intention and the behavior. Thanks!

Please wait for Manuel's reply before landing the patch ;-)

@klimek I've added a patch that simplifies the interface and the template code -. Hopefully you'll find that the code there is cleaner.

Feel free to land the patch now if comments are addressed. LG.

Nice, thanks! I have to admit, the implementation here is pretty neat! :) LGTM, a few nits.
+1 I'd use a completed statement in CHECK (like int /*range=*/Bar;), which is more obvious to readers. Maybe add a comment describing the following cases are named test group -- it took me a while to understand why these "invoking action" messages aren't ordered by the original source line number. nit: virtual isn't needed. I'm not a big fan of using namespace. It would make it hard to find out which namespace is the following method in -- readers have to go to the corresponding header file to find out the namespace. I'm +1 on using a more regular way namespace clang { namespace refactor {...} }. nit: virtual can be removed, the same below. Do you plan to use refactor on other files? s/clang_refactor/refactor Oops, I've put in the wrong diff link for r313025. This patch is still not committed. Probably in the future, yeah. Seems that you have attached a wrong diff, which is committed in r313025. SG. Yeah, I accidentally committed a prior patch with the link to this diff. I will commit this patch today though so it should be updated.
https://reviews.llvm.org/D36574
Introduction: StatTrak Fedora (Factory New) [ARDUINO]

Make your own StatTrak Fedora. This hat will be able to track how often you tip it, although it can't detect the action itself, so you have to push a button every time you want the counter on the screen to go up. (For anyone who doesn't know or hasn't guessed yet, this is based on a couple of jokes on the internet, like tipping your fedora to m'lady, and StatTrak from the game Counter-Strike: Global Offensive.) This is an easy project to build in about a weekend. The fedora that is shown in the pictures was also featured in a video by a popular YouTuber (look at 1:32): The idea for this project was inspired by this YouTuber as well. So check out his content, because he is a funny guy. We have also made a fake commercial for it:

Step 1: Acquire the Ingredients

What you'll need to build the fedora is this:

- Any type of fedora (preferably one with more headspace)
- An Arduino Starter Kit
- A piece of cardboard
- Glue or a glue gun
- Extension cables for the LCD screen and the button

If you got all these items, you're good to go!

Step 2: The Code

You'll need the code to make it work (obviously). Here is the code:

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

const int buttonPin = 6;
const int buttonPin1 = 8;
int buttonPushCounter = 0;
int buttonState = 0;
int lastButtonState = 0;

void setup() {
  pinMode(buttonPin, INPUT);
  pinMode(buttonPin1, INPUT);
  lcd.begin(16, 2);
  lcd.setCursor(0, 0);
  lcd.print("TIPTRAK: ");
}

void loop() {
  buttonState = digitalRead(buttonPin);
  if (buttonState != lastButtonState) {
    if (buttonState == HIGH) {
      buttonPushCounter++;
      lcd.setCursor(9, 0);
      lcd.print(buttonPushCounter);
    }
  }
  lastButtonState = buttonState;
}

Step 3: The Wiring Layout

For the wiring you'll need:

- Switch/Button
- 10 Kilohm Resistor
- 220 Ohm Resistor
- Potentiometer
- LCD Screen

For a picture of the layout, check the attached pictures.

Step 4: Get That Button Off the Breadboard!
Well, you obviously want to be able to count the tips. That is why you want to mount the button to the front of the fedora. To do this you are going to need:

- a breadboard connector
- 2 wires (preferably 15 cm or 6 inches)

Now that you have acquired these parts, you can start soldering. Solder the two wires to the connector. When you are done with this, you can solder the other ends of the wires to the button. If you have done this right, you will be able to connect it to the breadboard where the button was previously connected. Test if the connections you have made are working by turning on the Arduino and pushing the button. If the button doesn't work, you'll need to check the connections you made with the soldering iron. Now we can get to the next step.

Step 5: Now the Screen....

For the LCD screen you can actually do the same thing you did for the button, but there is an easier alternative to extend the reach of the screen. You can find premade extension wires (as shown in the picture) for the screen on sites like AliExpress, or ask your local electronics shop if they have something like this.

Step 6: Mount It to the Fedora

Now this is the part where everything can go wrong, because you need to make holes in your beloved fedora. So take some time to measure everything before you just go crazy with your scissors. In the picture you will see where you need to cut the holes. The red holes or slices are the places where you can stick the contacts (from the screen) through the fedora, and the blue holes are technically not needed if you glue the screen to the fedora, but if you don't want to permanently mount it you can use these holes for screws or something like that.

Step 7: Put Everything Together

Start by mounting the button and the screen. Use hot glue where you need it. Connect the screen to the Arduino and do a test fit. To mount the Arduino in the fedora you will need a piece of cardboard or something similar.
You can base your mount on the attached picture or fabricate your own. Always test fit and make cuts and adjustments where needed. When you have cut your mount to the right size, you can attach the Arduino to the mount using hot glue or something less permanent. Now you have to figure out how to power the Arduino. We do recommend using a power bank. Cut a hole so you can connect the USB cable to your Arduino. After that you can mount the mount to the fedora. Use hot glue or something similar. And basically you are done. Enjoy!! If you have any improvements, please tell us.

Comments

I'm thinking about making this and was curious how you reset the TipTrak.

Can this work for a top-hat? I want to be the best meme alive.

Hi eikew, thanks for making this guide. I myself am planning to make one of these, but I just need to know where I can buy the materials. Is there any specific website that you would suggest? Thanks - Tex :)
http://www.instructables.com/id/StatTrak-Fedora-Factory-New/
CC-MAIN-2018-13
refinedweb
1,010
80.72
Django is a framework used for building web applications quickly in a clean and efficient manner. As the size of an application increases, a common issue faced by all teams is the performance of the application. Measuring performance and analyzing the areas of improvement is key to delivering a quality product.

Before starting load testing, we have to decide the pages which we want to test. In our case, we expect users to follow the scenario where they log in, visit different pages and submit CSRF protected forms. LocustIO helps us in emulating the users performing these tasks on our web application. The basic idea of measuring the performance is to make a number of requests for different tasks and analyze the success and failure of those requests.

Installation

pip install locustio

LocustIO supports python 2.x only. Currently there is no support for python 3.x.

Locust File

A locust file is created to simulate the actions of users of the web application.

import json
import datetime

import requests
from locust import HttpLocust, TaskSet, task

class UserActions(TaskSet):
    def on_start(self):
        self.login()

    def login(self):
        # log in to the application
        response = self.client.get('/accounts/login/')
        csrftoken = response.cookies['csrftoken']
        self.client.post('/accounts/login/',
                         {'username': 'username', 'password': 'password'},
                         headers={'X-CSRFToken': csrftoken})

    @task(1)
    def index(self):
        self.client.get('/')

    @task(2)
    def first_page(self):
        self.client.get('/list_page/')

    @task(3)
    def second_page(self):
        response = self.client.get('/create_page/')
        csrftoken = response.cookies['csrftoken']
        self.client.post('/create_page/', {'name': 'first_obj'},
                         headers={'X-CSRFToken': csrftoken})

    @task(4)
    def add_advertiser_api(self):
        auth_response = self.client.post('/auth/login/',
                                         {'username': 'suser', 'password': 'asdf1234'})
        auth_token = json.loads(auth_response.text)['token']
        jwt_auth_token = 'jwt ' + auth_token
        now = datetime.datetime.now()
        current_datetime_string = now.strftime("%B %d, %Y")
        data = {'name': current_datetime_string}
        # the API URL was elided in the original post
        requests.post('', data, headers={'Authorization': jwt_auth_token})

class ApplicationUser(HttpLocust):
    task_set = UserActions
    min_wait = 0
    max_wait = 0

In the example above, the locust file defines a set of four tasks performed by the user - navigate to the home page after login, visit the list page multiple times, submit a form, and call an authenticated API. The parameters min_wait and max_wait define the wait time between different user requests.

Run Locust

Navigate to the directory of locustfile.py and run

locust --host=<host_name>

where <host_name> is the URL of the application. The Locust web UI then runs locally at http://127.0.0.1:8089. When a new test is started, the locust web UI prompts to enter the number of users to simulate and the hatch rate (number of users spawned per second).

We first try to simulate 5 users with a hatch rate of 1 user per second and observe the results. Once a test is started, LocustIO executes all tasks and records the success/failure of each request. These results are displayed in the format shown below:

As seen from the example above, there is one login request and multiple requests to get a page and submit a form. Since the number of users is small, there is no failover. Now let us increase the load to 1000 users with a hatch rate of 500 and see the results.

As we can see, some of the requests for fetching the homepage and posting the form fail in this scenario as the number of users and requests increases. With the current set of simulated users, we get a failure rate of 7%.

Observations:
- Most of the failures are in login. Some of the failures stem from the fact that the application prevents multiple logins from the same account in a short interval of time.
- Get requests for pages have a very low failure rate - 3%
- Post requests have lower failure rates of less than 2%

We can perform multiple tests for different ranges of users, and with the test results, we can identify under how much stress the application is capable of performing.
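The @task(n) weights above control how often Locust runs each task: a task with weight 4 is picked about four times as often as one with weight 1. Below is a standalone sketch of that selection behavior in plain Python — it is not Locust's internal scheduler, and the task names are just illustrative stand-ins for the tasks defined above.

```python
import random

# Replicate each hypothetical task name according to its @task weight,
# mimicking how weighted task selection behaves.
weighted_tasks = (
    ["index"] * 1             # @task(1)
    + ["first_page"] * 2      # @task(2)
    + ["second_page"] * 3     # @task(3)
    + ["advertiser_api"] * 4  # @task(4)
)

random.seed(0)
picks = [random.choice(weighted_tasks) for _ in range(10000)]
for name in ("index", "first_page", "second_page", "advertiser_api"):
    share = picks.count(name) / len(picks)
    print("%s: %.0f%%" % (name, 100 * share))  # roughly 10/20/30/40%
```

The observed shares converge to the weight ratios as the number of simulated requests grows, which is why heavier-weighted endpoints dominate the stats table.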
The result produces the following data for the tests:

Type of request - related to each task being simulated
Name - name of the task/request
Number of requests - total number of requests for a task
Number of failures - total number of failed requests
The median, average, max and min of requests in milliseconds
Content size - size of the request data
Requests per second

We can see the details of failed requests in the Failures tab, which can be used to identify the root cause of recurring failures. LocustIO provides an option to download the results in sheets, however there is no out of the box result visualization feature in the form of graphs or charts. Load test results can also be viewed in JSON format. These results can be used as input for data visualization using different tools like Tableau, matplotlib etc.

Thus we are able to determine the system performance at different endpoints in a very simple and efficient way. We can expand the tests to add more scenarios for more endpoints and quickly get answers.
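Since there is no built-in visualization, the downloaded sheets can be post-processed with a few lines of Python. Here is a minimal sketch, assuming a CSV export whose columns mirror the stats table described above; the file contents and column names are made up for illustration and may differ between Locust versions.

```python
import csv
import io

# Hypothetical CSV export with per-endpoint request and failure counts.
raw = """Name,# requests,# failures
/,1000,70
/list_page/,2000,60
/create_page/,1000,20
"""

rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(int(r["# requests"]) for r in rows)
failed = sum(int(r["# failures"]) for r in rows)
print("overall failure rate: %.1f%%" % (100.0 * failed / total))  # → 3.8%
```

The same per-row dictionaries can be fed straight into matplotlib or exported for Tableau, as the article suggests.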
https://blog.apcelent.com/Load-Testing-a-Django-Application-using-LocustIO.html
I am looking for a python function to splice an audio file (wav format) into 1 sec duration splices and store each of the new splices (of 1 sec duration) into a new .wav file.

It's really simple and easy using the pydub module, details of which are over here and here. pydub has a method called make_chunks to which you can specify the chunk length in milliseconds:

make_chunks(your_audio_file_object, chunk_length_ms)

Here is working code that splits a wav file into one-second chunks. I had an 8.5 second file, so the program created 9 one-second chunks that are playable. The last chunk will be smaller depending on the audio duration.

from pydub import AudioSegment
from pydub.utils import make_chunks

myaudio = AudioSegment.from_file("myAudio.wav", "wav")
chunk_length_ms = 1000  # pydub calculates in millisec
chunks = make_chunks(myaudio, chunk_length_ms)  # Make chunks of one sec

# Export all of the individual chunks as wav files
for i, chunk in enumerate(chunks):
    chunk_name = "chunk{0}.wav".format(i)
    print "exporting", chunk_name
    chunk.export(chunk_name, format="wav")

Output

Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> ================================ RESTART ================================
>>>
exporting chunk0.wav
exporting chunk1.wav
exporting chunk2.wav
exporting chunk3.wav
exporting chunk4.wav
exporting chunk5.wav
exporting chunk6.wav
exporting chunk7.wav
exporting chunk8.wav
>>>
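The number of chunk files produced can be predicted with a ceiling division, which is why the 8.5 second file above yields 9 chunks, with only the last one shorter than a second. A pydub-free sketch of that arithmetic:

```python
import math

def chunk_count(duration_ms, chunk_length_ms):
    # The last partial chunk still becomes a file, hence the ceiling.
    return int(math.ceil(float(duration_ms) / chunk_length_ms))

print(chunk_count(8500, 1000))  # 9 chunks: chunk0.wav .. chunk8.wav
print(chunk_count(8000, 1000))  # exactly 8, no partial chunk
```

This matches the nine exported files (chunk0.wav through chunk8.wav) shown in the output above.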
https://codedump.io/share/AoP6muGIe9jO/1/how-to-splice-an-audio-file-wav-format-into-1-sec-splices-in-python
Closed Bug 1164693 Opened 5 years ago Closed 4 years ago

The directional caret should point in the opposite direction of what backspace will do

Categories (Core :: Layout: Text and Fonts, defect)
Tracking () 2.2 S14 (12june)
People (Reporter: tedders1, Assigned: tedders1)
References (Depends on 1 open bug)
Details
Attachments (3 files, 1 obsolete file)

When the pref |bidi.browser.ui| is true, the caret is replaced with a "directional" caret. Right now, this caret seems to reflect the directionality of the current paragraph, which isn't terribly useful. What would be better is if the caret reflected the directionality of the current text it's in. And if the caret is at a bidi boundary, the caret should always point in the OPPOSITE direction of the movement that would happen when backspace is pressed. That is, backspace should always move the cursor backwards relative to the direction it's pointing in. This way, the user never needs to be confused about what hitting "backspace" will do.

Summary: The directional cursor should reflect the direction of the text it's in → The directional cursor should point in the opposite direction of what backspace will do

Caret. I mean "caret", not cursor.

Summary: The directional cursor should point in the opposite direction of what backspace will do → The directional caret should point in the opposite direction of what backspace will do

> Right now, this caret seems to reflect the directionality of the current paragraph, which isn't terribly useful.

My apologies. It seems that the caret currently reflects the current input language.

Assignee: nobody → tclancy

This patch:

1) Shows the bidi caret whenever: (a) the document is bidi; (b) the user is using an RTL keyboard; or (c) the bidi.browser.ui pref is set to true.

2) Makes the hook on the caret point reflect the caret bidi level. (I shall call this the "caret bidi direction".)

3) Did you know that when you have a bidi boundary (e.g. ABCאבג) and you click on the boundary (e.g. right between the C and the gimmel), the caret bidi direction depends on whether you clicked slightly to the right of the boundary or slightly to the left of the boundary? I didn't. And since you couldn't see the caret direction, it was really confusing. But now that you can see the caret direction, it's pretty nifty.

Make sure you update to a recent version of mozilla-central before testing this. It relies on Bug 1167788, which just recently landed on mozilla-central.

Oops. I mean, it depends on Bug 1067788, not Bug 1167788.

(In reply to Ted Clancy [:tedders1] from comment #3)
> This patch:
>
> 1) Shows the bidi caret whenever: (a) the document is bidi;

We used to do that but people complained -- see bug 418513. The problem is that a lot of documents happen to be bidi because they have an RTL character tucked away somewhere -- a good example is any Wikipedia page that has an RTL language in the "this page in other languages" list.

From a UX point of view, maybe the ideal thing to do would be to show the bidi caret automatically at bidi boundaries, but not otherwise. But I'm not sure offhand how tricky this might be to implement.

How about we show the bidi caret automatically in bidi paragraphs? (Mind you, I don't know how I'd detect that the caret is in a bidi paragraph. But I'll figure something out.)

> maybe the ideal thing to do would be to show the bidi caret automatically at bidi boundaries, but not
> otherwise. But I'm not sure offhand how tricky this might be to implement.

That's not hard.

I modified the patch so that it only shows the bidi caret in paragraphs that are bidi. It does this by checking for NS_FRAME_IS_BIDI, which is set on frames that are in bidi paragraphs. Otherwise, it shows the bidi caret if the keyboard language is RTL, or if bidi.browser.ui is set, as before.
Attachment #8614511 - Attachment is obsolete: true
Attachment #8616322 - Flags: review?(smontagu)

This patch fixes a problem where non-bidi paragraphs ending with a newline (which is most of them) were accidentally considered bidi.

Attachment #8616323 - Flags: review?(smontagu)

Comment on attachment 8616322 [details] [diff] [review]
bug-1164693-fix-part-1.patch

Review of attachment 8616322 [details] [diff] [review]:
-----------------------------------------------------------------

::: layout/base/nsCaret.cpp
@@ +934,5 @@
> + int caretBidiLevel = selection->GetFrameSelection()->GetCaretBidiLevel();
> + if (caretBidiLevel & BIDI_LEVEL_UNDEFINED) {
> +   caretBidiLevel = NS_GET_EMBEDDING_LEVEL(aFrame);
> + }
> + bool isCaretRTL = caretBidiLevel % 2;

Please make caretBidiLevel an nsBidiLevel, and use IS_LEVEL_RTL(caretBidiLevel) here

Attachment #8616322 - Flags: review?(smontagu) → review+

Comment on attachment 8616323 [details] [diff] [review]
bug-1164693-fix-part-2.patch

Review of attachment 8616323 [details] [diff] [review]:
-----------------------------------------------------------------

::: layout/base/nsBidiPresUtils.cpp
@@ +733,5 @@
> #endif
> #endif
> #endif
>
> + bool isNonBidi = false;

Nit: Double negatives are confusing. I think it's clearer to call this isBidi (and reverse the true/false values, of course)

Attachment #8616323 - Flags: review?(smontagu) → review+

A thought for future enhancement: maybe it would be useful to add a new frame property to cache the result of GetDirection: NSBIDI_LTR, NSBIDI_RTL, or NSBIDI_MIXED? I think using that would be more accurate than using NS_FRAME_IS_BIDI -- I think I've been inclined to set NS_FRAME_IS_BIDI on a "can't do any harm" basis in the past.

Target Milestone: --- → 2.2 S14 (12june)

Some assertions were failing during tests, and it's because nsTextFrame::GetInFlowContentLength() makes some assumptions that aren't true for preformatted blocks of text.
Attachment #8625066 - Flags: review?(smontagu)

Newer treeherder run:

Status: NEW → RESOLVED
Closed: 4 years ago
status-firefox41: --- → fixed
Resolution: --- → FIXED
https://bugzilla.mozilla.org/show_bug.cgi?id=1164693&amp;GoAheadAndLogIn=1
NEM's namespaces also act as aliases for addresses. Since every namespace is associated with a specific account, it's possible to use each namespace in place of an account address by adding @ to the front. For example, the namespace crypto.news is associated with its owner's address NAWNNR-2SEDKU-YOBSKU-Q3VLZE-7WQW3D-YJ6UTE-SXOJ. If you want to send a transaction to this account, you can enter @crypto.news in the recipient's address field and the transaction will reach that address. This is useful because it makes it easy to tell whether you are really sending funds to a legitimate crypto.news address, as opposed to a 40-digit address string which could belong to anyone.
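Conceptually, the alias system is a lookup from namespace to owning account: strip the leading @ and substitute the owner's address. The sketch below illustrates that behavior with a plain dictionary; the registry dict is hypothetical and stands in for the lookup the NEM network actually performs.

```python
# Hypothetical namespace registry; on NEM this lookup is done by the network.
NAMESPACE_OWNERS = {
    "crypto.news": "NAWNNR-2SEDKU-YOBSKU-Q3VLZE-7WQW3D-YJ6UTE-SXOJ",
}

def resolve_recipient(recipient):
    """Turn an '@namespace' alias into the owner's address, or pass through."""
    if recipient.startswith("@"):
        return NAMESPACE_OWNERS[recipient[1:]]
    return recipient  # already a plain address

print(resolve_recipient("@crypto.news"))
# → NAWNNR-2SEDKU-YOBSKU-Q3VLZE-7WQW3D-YJ6UTE-SXOJ
```

Because the alias is resolved against the namespace's registered owner, a recipient typed as @crypto.news can only ever reach that owner's account.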
http://docs.nem.io/en/nanowallet/namespaces/alias-system
Howdy! Dotnetsky here -- back again after an extended secret mission that I can't talk about. (I was in Phoenix, I hopped a train like in the "old days" and hoboed my way to the Native American Church to find out what it is in peyote that causes such unusual effects, and how it can be used in Martini recipes. I also wanted to find out if it can improve your coding skills. I haven't written a single line of code since I was there, so I guess it has some promise -- but I'll keep you posted.) But now I'm back, so look out. The rant is coming!

Ya know, I never dreamed I'd become so popular. One of those "blogger" people even has a list of all my rants on his blog: And this, even after I clearly explained that Blogs Suck. Did a lot better job of finding all the little boogers than those two nerds here at eggheadcafe.com ever did! They keep changing the damned wallpaper on this site to the point where I can't even find anything anymore! Well, nevermind.

The whole thing about any site or product is "shelf shout". Sometimes the content doesn't matter as much as the presentation, if it really looks nice. VS.NET 2005, even in BETA 2, has acquired some real shelf shout. The installation is snazzy and eclectic - it has all "socially correct" non-Caucasian people showing up on those revolving ad panes during the install, smiling and looking satisfied while you are waiting for your 11,297 new Registry entries to be added (not to mention the 6,987 Registry Entries that were still left there, like screaming little orphans on the beach after the tsunami, when you uninstalled your previous copy of VS.NET 2005 BETA 1...) But it definitely has Shelf Shout - not just the install -- the features in the IDE too -- the Help, Debugger Visualizations, and much more. A nice job. ASP.NET 2.0 is one area that has some major improvements in how the IDE works, round-trip HTML in design and code view, a built-in Cassini webserver, loads of new controls, and much more.
And it also has a new Framework to go with it that supposedly has twice as many classes as Framework 1.1. Now, that's a real kick in the head -- it's gonna keep me busier than a one-armed paperhanger. Dr. Dotnetsky has a pile of 50 some-odd books on DotNet 1.1. Might as well chuck 'em all, cause I'm gonna need a whole buncha new ones for Dot Net 2.0.

However, I can see some real business opportunities here for a totally new kind of "book". Now here's the deal: You have a web site that presents a browsable, searchable class library browser for the entire .NET Framework 2.0. You can type in a search phrase about what you need to do, and you get back a list of links to content and articles, ordered by relevance. As you drill down, you are presented with descriptions and even code snippets that are contributed by the authors and member-readers, as well as links to the MSDN-2 url rewrite scheme of searching by namespace, e.g. : "". If it were advertiser-supported, you wouldn't need a publisher and you wouldn't need to sell any books. Just let people come on site and read the content! Makes total sense to me. But then, heh! -- what do I know? I'm just a dumb geek. Hey, where's the Gyro Sauce?

Nigel Shaw, who happens to be a highly experienced developer, wrote a piece about VB.NET vs. C# on Codeproject.com, but it's a "different" article. Rather than focusing on the languages, it focuses on the cultural history and differences representing the evolution of each language and the developers behind it. Actually, one of the more thought-provoking pieces I've read. Dr. Dotnetsky doesn't totally agree with Mr. Shaw, but does agree conceptually (for those who are students of DotNetskyHistory, here's a link to my last rant about VB.NET vs C# which is more technical in nature.) Not to be daunted, our own Peter Bromberg apparently has seen fit to chime in on his own "UnBlog" about VB.NET. ("Un" blog my butt! Looks like a fewkin' blog to me!)
As one would expect, the majority response to Shaw's excellent piece was from apparently "affronted" VB developers, many of whom possess below room-temperature IQ's and who cannot even spell, who attacked him personally and showed (unsurprisingly) how utterly crude they can really be, all the while completely missing the obvious major point of Shaw's article, which really has little to do with "putting down" VB.NET and VB.NET developers and more about explaining how, historically and culturally, things came to pass in his expert view.

The fact of the matter "according to Dexter" is that there are a few really really good OOP-oriented developers in .NET who prefer to write code in Visual Basic .Net. And, almost without exception, they do so with Option Strict and Option Explicit turned on, and make every possible attempt to avoid the dependence on the Microsoft.VisualBasic namespace. They don't write CType(mystring, Integer) or CInt(mystring) when they can write Convert.ToInt32(mystring). They don't write CType(thisObject, thatType) when they can write DirectCast(thisObject, thatType). And, they know the difference, and why you should do it. And, they use carefully constructed Try / Catch / Finally blocks, not "on error goto", which is probably the most horrible coding construct ever conceived by man on this planet, at least since Barney Rubble invented the electronic ignition system some 45,000 years ago.

Dr. Dotnetsky would venture to say, moreover, that the majority of these "Expert VB.NET coders" are actually using VB.Net because they are book authors, technical presenters or teachers and they know they can make more money because if they put out their content in VB.NET instead of C#, there will be a bigger audience and more money. So be it. Microsoft created VB.NET because of marketing and money too, not because we needed a new .NET language. If we did, all of ASP.NET and the Enterprise Library wouldn't have been written solely in C#.
These "expert VB.Netters" are people who are usually perfectly capable of doing all their coding in C#, and choose not to in order to be able to pay the mortgage. But, alas, they are a small minority.

And let me just say one more little thing. When I need total, utter speed for intensive math and arrays, like when I am doing a managed audio codec implementation for my RTP Stack, I want to use pointer arithmetic. No way I am gonna do that with "wee bee".net.

Public Class BoyAmIDim
    Public Function TestMeOut(ByVal strTest As String) As Boolean
        If (IsNumeric(strTest)) Then
            Return 5
        Else
            Return -23
        End If
    End Function
End Class

Don't laugh, Kiddies. Dr. Dotnetsky has actually seen code like the above and had to fix it. They were returning integers from a method whose return value was clearly marked Boolean. It's not that the integer values were wrong - that in itself proves the kind of boo-boos you can commit -- so much as the fact that in the Common Type System, a Boolean can only be either true or false, not "a number". Of course, in C# you have to write type-safe code, or it simply won't compile. There is no false luxury of "Option Strict" off, and you better believe that's the way it should be!

Scott Hanselman says, "Set VB6.0=Nothing". Dr. Dotnetsky says, "Set VB.NET = VB.NOT". And Dr. Dotnetsky agrees with Nigel Shaw about the 80-20 rule. The majority of VB.NET coders are lazy, unwilling to learn, and a bunch of insecure crybabies. Their mean IQ is shockingly lower than that of C# developers. (Hey, if you wrote out the word "Dim" 100 times a day, your IQ would decline too). If they were mature, forward thinking developers, they would have taken the time and gone through the pain of "RTFM" and would now be coding in C#, in which unlikely case of course, this whole debacle would be moot. Above all, they certainly wouldn't be posting the kind of vulgar comment spam you see at the bottom of Mr. Shaw's fine article, whether you agree with it or not.

Bottom line?
If you really want high performance managed code, use C++, and forget about the morons involved in the debate, most of whom are vulgar, opinionated, and haven't a clue on either side.

"Third Floor: Debugger Visualizers, Generics, Iterators and SQL Server Service Broker..."

The VB.NET crowd is really seeing a different Windows Wizard dialog, because they have, by choice, stuck themselves with a driver that can't be upgraded: Surely, these "Vee-Bee" developers, who think their poop smells like Chardonnay, will help usher in an era of industry-wide innovation, huh? Anders Hejlsberg and his team are, I am sure, just sitting by the phone, waiting on pins and needles, to hear their expert advice.

Note: Since this was originally posted, there have been a number of comments about it in various places where it appears to miss the mark. Let me be a bit more specific about the issue as I see it: VB.NET is an "almost" first class member of the .NET language family. I say "almost" because there are certain language capabilities that it does not have. I'm not talking about things like

With xyz
    .dothis
    .doThat
End With

-- Those are language features, not language capabilities. Being able to do pointer array referencing is a capability. Being able to overload operators, and so on. You can write perfectly fine CLS-Compliant code in VB.NET, provided you turn on Option Explicit and Option Strict, and REMOVE all references to namespaces with "VisualBasic" in their names. But, 90%+ of VB.NET programmers will not do this. THAT'S THE ISSUE, PERIOD! It's a cultural, NOT a language issue! You can debate which language is better based on features until your face turns blue, you still didn't get it! I've even seen one blog post where somebody tweaked IL code to the point where they could state that VB.NET was 9 clock ticks faster than C# for some particular operation, and if it is, that's WONDERFUL, but it's NOT THE ISSUE!
The issue is people being encouraged, through propagation of culture, to write BAD CODE, get away with it, and not know the difference! Can I make it any plainer?

On a new note, while in Phoenix with my Indian friends, Dr. Dotnetsky tried Yerba Mate for the first time. This is the infusion made from the leaves of a relative of the small Holly tree and is the national drink of Argentina, Uruguay and Paraguay. Millions of primarily Spanish speaking people in South and Latin America drink up to 20 cups of Yerba Mate daily. It has no known side effects, gives you a wonderful energy high (it has "Mateine", a close relative of caffeine, but without the jittery side effects) and is loaded with vitamins and antioxidants. You can buy this in health food stores, it has a kind of woodsy smell and looks a bit like low-grade homegrown weed, but it has a very pleasant taste when mixed with a bit of sugar that can grow on you. Recommended. (No, I haven't tried mixing it with Vodka yet). In South America they drink it from a gourd and sip it from a metal straw called a Bombilla that has a built-in filter at the bottom to prevent the fine leaves and stems from coming into your mouth. It has no known aphrodisiac effects, other than being an excellent excuse to sit down inside with a beautiful woman under the pretense of "do you wanna see my bombilla?".

Well, there will be more, but since I am just getting back into gear, wait a bit and I'll be back with the fury of a Wormhole between two Braneworlds. Cheers!
http://www.nullskull.com/articles/20050504.asp
import "golang.org/x/exp/shiny/vendor/github.com/BurntSushi/xgb"

Package XGB provides the X Go Binding, which is a low-level API to communicate with the core X protocol and many of the X extensions. It is *very* closely modeled on XCB, so that experience with XCB (or xpyb) is easily translatable to XGB. That is, it uses the same cookie/reply model and is thread safe. There are otherwise no major differences (in the API).

Most uses of XGB typically fall under the realm of window manager and GUI kit development, but other applications (like pagers, panels, tilers, etc.) may also require XGB. Moreover, it is a near certainty that if you need to work with X, xgbutil will be of great use to you as well:

This is an extremely terse example that demonstrates how to connect to X, create a window, listen to StructureNotify events and Key{Press,Release} events, map the window, and print out all events received. An example with accompanying documentation can be found in examples/create-window.

package main

import (
	"fmt"

	"github.com/BurntSushi/xgb"
	"github.com/BurntSushi/xgb/xproto"
)

func main() {
	X, err := xgb.NewConn()
	if err != nil {
		fmt.Println(err)
		return
	}

	wid, _ := xproto.NewWindowId(X)
	screen := xproto.Setup(X).DefaultScreen(X)
	xproto.CreateWindow(X, screen.RootDepth, wid, screen.Root,
		0, 0, 500, 500, 0,
		xproto.WindowClassInputOutput, screen.RootVisual,
		xproto.CwBackPixel|xproto.CwEventMask,
		[]uint32{ // values must be in the order defined by the protocol
			0xffffffff,
			xproto.EventMaskStructureNotify |
				xproto.EventMaskKeyPress |
				xproto.EventMaskKeyRelease})

	xproto.MapWindow(X, wid)

	for {
		ev, xerr := X.WaitForEvent()
		if ev == nil && xerr == nil {
			fmt.Println("Both event and error are nil. Exiting...")
			return
		}
		if ev != nil {
			fmt.Printf("Event: %s\n", ev)
		}
		if xerr != nil {
			fmt.Printf("Error: %s\n", xerr)
		}
	}
}

This is another small example that shows how to query Xinerama for geometry information of each active head.
Accompanying documentation for this example can be found in examples/xinerama.

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/xgb"
	"github.com/BurntSushi/xgb/xinerama"
)

func main() {
	X, err := xgb.NewConn()
	if err != nil {
		log.Fatal(err)
	}

	// Initialize the Xinerama extension.
	// The appropriate 'Init' function must be run for *every*
	// extension before any of its requests can be used.
	err = xinerama.Init(X)
	if err != nil {
		log.Fatal(err)
	}

	reply, err := xinerama.QueryScreens(X).Reply()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Number of heads: %d\n", reply.Number)
	for i, screen := range reply.ScreenInfo {
		fmt.Printf("%d :: X: %d, Y: %d, Width: %d, Height: %d\n",
			i, screen.XOrg, screen.YOrg, screen.Width, screen.Height)
	}
}

XGB can benefit greatly from parallelism due to its concurrent design. For evidence of this claim, please see the benchmarks in xproto/xproto_test.go.

xproto/xproto_test.go contains a number of contrived tests that stress particular corners of XGB that I presume could be problem areas. Namely: requests with no replies, requests with replies, checked errors, unchecked errors, sequence number wrapping, cookie buffer flushing (i.e., forcing a round trip every N requests made that don't have a reply), getting/setting properties and creating a window and listening to StructureNotify events.

Both XCB and xpyb use the same Python module (xcbgen) for a code generator. XGB (before this fork) used the same code generator as well, but in my attempt to add support for more extensions, I found the code generator extremely difficult to work with. Therefore, I re-wrote the code generator in Go. It can be found in its own sub-package, xgbgen, of xgb. My design of xgbgen includes a rough consideration that it could be used for other languages.

I am reasonably confident that the core X protocol is in full working form. I've also tested the Xinerama and RandR extensions sparingly.
Many of the other existing extensions have Go source generated (and are compilable) and are included in this package, but I am currently unsure of their status. They *should* work. XKB is the only extension that intentionally does not work, although I suspect that GLX also does not work (however, there is Go source code for GLX that compiles, unlike XKB). I don't currently have any intention of getting XKB working, due to its complexity and my current mental incapacity to test it.

auth.go conn.go cookie.go doc.go help.go sync.go xgb.go

var (
	// Where to log error-messages. Defaults to stderr.
	// To disable logging, just set this to log.New(ioutil.Discard, "", 0)
	Logger = log.New(os.Stderr, "XGB: ", log.Lshortfile)

	// ExtLock is a lock used whenever new extensions are initialized.
	// It should not be used. It is exported for use in the extension
	// sub-packages.
	ExtLock sync.Mutex
)

var NewErrorFuncs = make(map[int]NewErrorFun)

NewErrorFuncs is a map from error numbers to functions that create the corresponding error. It should not be used. It is exported for use in the extension sub-packages.

var NewEventFuncs = make(map[int]NewEventFun)

NewEventFuncs is a map from event numbers to functions that create the corresponding event. It should not be used. It is exported for use in the extension sub-packages.

var NewExtErrorFuncs = make(map[string]map[int]NewErrorFun)

NewExtErrorFuncs is a temporary map that stores error constructor functions for each extension. When an extension is initialized, each error for that extension is added to the 'NewErrorFuncs' map. It should not be used. It is exported for use in the extension sub-packages.

var NewExtEventFuncs = make(map[string]map[int]NewEventFun)

NewExtEventFuncs is a temporary map that stores event constructor functions for each extension. When an extension is initialized, each event for that extension is added to the 'NewEventFuncs' map. It should not be used. It is exported for use in the extension sub-packages.
Errorf is just a wrapper for fmt.Errorf. Exists for the same reason that 'stringsJoin' and 'sprintf' exists.

Get16 constructs a 16 bit integer from the beginning of a byte slice.

Get32 constructs a 32 bit integer from the beginning of a byte slice.

Get64 constructs a 64 bit integer from the beginning of a byte slice.

Pad a length to align on 4 bytes.

PopCount counts the number of bits set in a value list mask.

Put16 takes a 16 bit integer and copies it into a byte slice.

Put32 takes a 32 bit integer and copies it into a byte slice.

Put64 takes a 64 bit integer and copies it into a byte slice.

Sprintf is so we don't need to import 'fmt' in the generated Go files.

StringsJoin is an alias to strings.Join. It allows us to avoid having to import 'strings' in each of the generated Go files.

type Conn struct {
	DisplayNumber int
	DefaultScreen int
	SetupBytes    []byte

	// Extensions is a map from extension name to major opcode. It should
	// not be used. It is exported for use in the extension sub-packages.
	Extensions map[string]byte
	// contains filtered or unexported fields
}

A Conn represents a connection to an X server.

NewConn creates a new connection instance. It initializes locks, data structures, and performs the initial handshake. (The code for the handshake has been relegated to conn.go.)

NewConnDisplay is just like NewConn, but allows a specific DISPLAY string to be used. If 'display' is empty it will be taken from os.Getenv("DISPLAY").
Together, there are four different kinds of cookies. (See more detailed comments in the function for more info on those.) Note that a sequence number is not set until just before the request corresponding to this cookie is sent over the wire. Unless you're building requests from bytes by hand, this method should not be used. NewId generates a new unused ID for use with requests like CreateWindow. If no new ids can be generated, the id returned is 0 and error is non-nil. This shouldn't be used directly, and is exported for use in the extension sub-packages. If you need identifiers, use the appropriate constructor. e.g., For a window id, use xproto.NewWindowId. For a new pixmap id, use xproto.NewPixmapId. And so on. NewRequest takes the bytes and a cookie of a particular request, constructs a request type, and sends it over the Conn.reqChan channel. Note that the sequence number is added to the cookie after it is sent over the request channel, but before it is sent to X. Note that you may safely use NewRequest to send arbitrary byte requests to X. The resulting cookie can be used just like any normal cookie and abides by the same rules, except that for replies, you'll get back the raw byte data. This may be useful for performance critical sections where every allocation counts, since all X requests in XGB allocate a new byte slice. In contrast, NewRequest allocates one small request struct and nothing else. (Except when the cookie buffer is full and has to be flushed.) If you're using NewRequest manually, you'll need to use NewCookie to create a new cookie. In all likelihood, you should be able to copy and paste with some minor edits the generated code for the request you want to issue. PollForEvent returns the next event from the server if one is available in the internal queue without blocking. Note that unlike WaitForEvent, both Event and Error could be nil. Indeed, they are both nil when the event queue is empty. 
Sync sends a round trip request and waits for the response. This forces all pending cookies to be dealt with. You actually shouldn't need to use this like you might with Xlib. Namely, buffers are automatically flushed using Go's channels and round trip requests are forced where appropriate automatically.

WaitForEvent returns the next event from the server. It will block until an event is available. WaitForEvent returns either an Event or an Error. (Returning both is a bug.) Note that an Error here is an X error and not an XGB error. That is, X errors are sometimes completely expected (and you may want to ignore them in some cases). If both the event and error are nil, then the connection has been closed.

Cookie is the internal representation of a cookie, where one is generated for *every* request sent by XGB. 'cookie' is most frequently used by embedding it into a more specific kind of cookie, i.e., 'GetInputFocusCookie'.

Check is used for checked requests that have no replies. It is a mechanism by which to report "success" or "error" in a synchronous fashion. (Therefore, unchecked requests without replies cannot use this method.) If the request causes an error, it is sent to this cookie's errorChan. If the request was successful, there is no response from the server. Thus, pingChan is sent a value when the *next* reply is read. If no more replies are being processed, we force a round trip request with GetInputFocus. Unless you're building requests from bytes by hand, this method should not be used.

Reply detects whether this is a checked or unchecked cookie, and calls 'replyChecked' or 'replyUnchecked' appropriately. Unless you're building requests from bytes by hand, this method should not be used.

Error is an interface that can contain any of the errors returned by the server. Use a type assertion switch to extract the Error structs.

Event is an interface that can contain any of the events returned by the server.
Use a type assertion switch to extract the Event structs.

NewErrorFun is the type of function used to construct errors from raw bytes. It should not be used. It is exported for use in the extension sub-packages.

NewEventFun is the type of function used to construct events from raw bytes. It should not be used. It is exported for use in the extension sub-packages.

Package xgb imports 10 packages. Updated 2017-06-02.
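The wire-format helpers listed above (Get16/Put16, Pad, PopCount) compute simple byte-level operations that are easy to illustrate outside Go. The sketch below is a hedged Python rendering of what those helpers do as described in this documentation; little-endian byte order is an assumption on my part, since the X byte order is negotiated at connection setup:

```python
def get16(buf):
    """Construct a 16-bit integer from the beginning of a byte slice
    (little-endian assumed here)."""
    return buf[0] | (buf[1] << 8)

def put16(buf, v):
    """Take a 16-bit integer and copy it into a byte slice."""
    buf[0] = v & 0xFF
    buf[1] = (v >> 8) & 0xFF

def pad(n):
    """Pad a length to align on 4 bytes."""
    return (n + 3) & ~3

def popcount(mask):
    """Count the number of bits set in a value-list mask."""
    return bin(mask).count("1")

buf = bytearray(2)
put16(buf, 0xBEEF)
print(get16(buf))        # 48879 (0xBEEF round-trips)
print(pad(5))            # 8
print(popcount(0b1011))  # 3
```

The pad computation matters because X requests are padded to 4-byte boundaries; popcount is how the protocol knows how many values follow a value-list mask.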
http://godoc.org/golang.org/x/exp/shiny/vendor/github.com/BurntSushi/xgb
CC-MAIN-2017-34
refinedweb
2,063
59.6
digitalmars.D.bugs - [Issue 12059] New: Smarter error messages when a module contains a namespace with the same name - d-bugmail puremagic.com Feb 02 2014

Summary: Smarter error messages when a module contains a namespace with the same name
Product: D
Version: D2
Platform: All
OS/Version: All
Status: NEW
Keywords: diagnostic
Severity: enhancement
Priority: P2
Component: DMD
AssignedTo: nobody puremagic.com
ReportedBy: bearophile_hugs eml.cc

--- Comment #0 from bearophile_hugs eml.cc 2014-02-02 02:35:15 PST ---
This is just one example of a problem in D code that I have seen several times in D.learn and elsewhere:

-------------
I am having trouble using an enum defined in a separate module. When I try to access it, I am getting an "Undefined symbol" error:

// CodeEnum.d
enum CodeEnum
{
    OK = 200,
    FAIL = 400
}

unittest
{
    auto e = CodeEnum.OK; // Works!
}

// Reply.d
import CodeEnum;

unittest
{
    auto e = CodeEnum.OK; // Error: undefined identifier 'OK'
}

What am I doing wrong?
-------------

The answer that explains the problem:

-------------
The module and your enum have the same name. When the compiler sees the `CodeEnum` symbol, it considers you're referring to the module. This module does not have an `OK` member, hence the error. In D, do not use the same symbol for a module and one of its inner symbols.
-------------

A long time ago I was hit by a similar problem defining a "set.d" module containing a "Set" struct plus a "set()" helper function.

There are various ways to avoid this problem. One way is to always forbid defining the name "foo" inside the module named "foo", with an error message (like: "Error: module foo contains a member named foo. Module members cannot shadow module names", as suggested by Philippe Sigaud). I like that idea, but it's a significant breaking change, and it looks quite restrictive.

An alternative solution that is a much less disruptive change is to just improve the error message.
Instead of just giving "Error: undefined identifier 'OK'", a more descriptive error message can tell the programmer exactly what the problem is and how to fix it:

Error: undefined identifier 'OK' inside module 'CodeEnum.d'. Did you mean to use 'CodeEnum.CodeEnum.OK'?

This error message does not avoid the problems, so it's less good than forbidding the name duplication, but it could be enough.

--
Configure issuemail:
------- You are receiving this mail because: -------

--- 2014-02-02 06:07:52 PST ---
The first option would break code, if someone actually wants to write code this way. So a simple diagnostic change would be preferred IMO.

Feb 02 2014
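The pitfall described in this bug report (a module and one of its members sharing a name, with the module winning name lookup) has a close analogue in Python, which makes the proposed diagnostic easy to reason about concretely. The sketch below fabricates a CodeEnum module on the fly purely for illustration; it is not D, just the same shadowing situation in another language:

```python
import sys
import types

# Build a module named CodeEnum that itself contains a class CodeEnum,
# mirroring the D example from the bug report.
mod = types.ModuleType("CodeEnum")
exec("class CodeEnum:\n    OK = 200\n    FAIL = 400\n", mod.__dict__)
sys.modules["CodeEnum"] = mod

import CodeEnum  # resolves to the *module*, not the class

# The module has no OK member; the class is one level deeper,
# which is exactly what the suggested error message points at.
print(hasattr(CodeEnum, "OK"))  # False
print(CodeEnum.CodeEnum.OK)     # 200
```

The doubled `CodeEnum.CodeEnum.OK` access is the Python equivalent of the hint the proposed DMD message would give.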
http://www.digitalmars.com/d/archives/digitalmars/D/bugs/Issue_12059_New_Smarter_error_messages_when_a_module_contains_a_namespace_with_the_same_name_61599.html
CC-MAIN-2014-42
refinedweb
448
65.42
GoDaddy vs Reality

So, recently I had to do an install for a client on GoDaddy... Hey, don't look at me like that... they had some special deal, and I just didn't care.

After finding this article I wasn't satisfied... mostly because I could not use pre-1.0 Django. So, I went in search of a pure python MySQL adapter. And I found it in pyMySQL. It's deliberately written to be compatible with MySQLdb. All I had to do was put into my settings:

import pymysql
pymysql.install_as_MySQLdb()

I put it in just before the DATABASES clause, and things are working.

Addendum

GoDaddy are not only far behind, but they also don't give a rat's arse about Python. So they don't install anything but basic python. This means no PIL -- no Image fields. However... virtualenv to the rescue! Since virtualenv ships with a functional script, you can download and untar it, create your venv, and go from there.

What's that I hear you say? PIL needs compiling? Sure, but someone's already done the work for you. It's in the RPM.

rpm2cpio PIL-*.rpm | cpio -id

Then copy the PIL directory from usr/lib/python2.3/site-packages to your venv lib/python2.4/site-packages and you're away!
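What install_as_MySQLdb actually does is register pymysql in sys.modules under the name MySQLdb, so that any later `import MySQLdb` (e.g. inside Django's backend) silently gets pymysql instead. The trick generalizes to any module. Here's a hedged sketch of the mechanism using an invented stand-in module rather than pymysql itself, so nothing here is the real pymysql API beyond the idea:

```python
import sys
import types

# A tiny stand-in for a pure-Python driver (hypothetical, for illustration).
fake_driver = types.ModuleType("purepy_driver")
fake_driver.connect = lambda **kw: "connected via pure-python driver"

def install_as(alias):
    """Register the stand-in under another module name, the way
    pymysql.install_as_MySQLdb() registers pymysql as MySQLdb."""
    sys.modules[alias] = fake_driver

install_as("legacy_driver")

import legacy_driver  # actually resolves to fake_driver via sys.modules
print(legacy_driver.connect(host="localhost"))
```

Because the import system checks sys.modules before searching the filesystem, the alias has to be installed before the first `import MySQLdb` runs, which is why the call sits near the top of settings, before DATABASES.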
http://musings.tinbrain.net/blog/2011/jan/27/godaddy-vs-reality/
CC-MAIN-2018-22
refinedweb
219
78.96
This article has been provided courtesy of MSDN.

During a recent project, one of the requirements was to show an animated GIF on a Microsoft® .NET Compact Framework Windows® Form. Version 1.0 of the .NET Compact Framework does not include the capability to display animated GIF files, nor does it incorporate the ImageAnimator helper class from the full .NET Framework. The ImageAnimator class allows animation of an image that has time-based frames.

Even though it is possible to write C# code to read an animated GIF in GIF89a format, I have chosen a simpler and more straightforward way to display animation in my program. If you open an animated GIF in the GIF editor of your choice, you will see that this file consists of a few images (frames) that follow each other:

Figure 1. Animation frames

These images are stored in a compressed format with information on the size, quantity and delay time between the frames. This information is read by the program that displays the animation. Many of the GIF editors allow you to extract the image frames into a sequential "storyboard" of the frames:

Figure 2. Storyboard

I saved this into a single bitmap file, which I later converted into GIF format as it uses less memory within the .NET Compact Framework. Now I am going to show you how to use this image to create a .NET Compact Framework-based Animation control.

The way in which we're going to animate the bitmap is rather simple. It relies on the fact that when you're using an image in the .NET Compact Framework, you don't necessarily have to display the entire image you've loaded into memory. One of the overloaded methods of the graphics.DrawImage method accepts a Rectangle object as a parameter. This rectangle will be our way of framing each image in the storyboard bitmap. By moving the position of the frame rectangle, we can dynamically load a different section of the bitmap to be displayed on our form.
We add a new class AnimateCtl to a .NET Compact Framework project and derive this class from System.Windows.Forms.Control:

using System;
using System.Windows.Forms;
using System.Drawing;
using System.Drawing.Imaging;

public class AnimateCtl : System.Windows.Forms.Control
{
    // Add class implementation here
}

Let's add a public Bitmap property to the class that will be used to pass the bitmap from the client. Don't forget to declare a private member for the bitmap, for use within the class:

private Bitmap bitmap;

public Bitmap Bitmap
{
    get { return bitmap; }
    set { bitmap = value; }
}

The control we create will draw the frames by using the DrawImage method of the Graphics object retrieved from the control:

private void Draw(int iframe)
{
    // Calculate the left location of the drawing frame
    int XLocation = iframe * frameWidth;
    Rectangle rect = new Rectangle(XLocation, 0, frameWidth, frameHeight);
    // Draw image
    graphics.DrawImage(bitmap, 0, 0, rect, GraphicsUnit.Pixel);
}

This method accepts the current frame number that needs to be drawn. We then create the drawing rectangle by calculating its left position.

In order to implement the looping logic of the control, I've chosen to utilize the System.Windows.Forms.Timer. Although quite a few other options exist to provide the same functionality, such as System.Threading.Timer or even creating a separate thread, the usage of the System.Windows.Forms.Timer proved to be a simpler and more convenient approach.
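The frame-selection arithmetic in Draw is language-independent: frame i of a horizontal storyboard strip occupies the source rectangle (i * frameWidth, 0, frameWidth, frameHeight). A quick sketch of that computation follows; it is written in Python just to show the math, not the .NET API, and the 60-pixel frame height is made up for illustration (the article only fixes the 92-pixel width via StartAnimation):

```python
def frame_rect(i, frame_width, frame_height):
    """Source rectangle (left, top, width, height) of frame i
    in a horizontal storyboard strip, as computed in Draw()."""
    return (i * frame_width, 0, frame_width, frame_height)

# Left edges of the first five 92px-wide frames:
print([frame_rect(i, 92, 60)[0] for i in range(5)])
# [0, 92, 184, 276, 368]
```

Only the left edge moves; top, width and height stay constant, which is why the control needs just the frame width and the bitmap's height to locate every frame.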
Let's add the following code in the control's constructor:

public AnimateCtl()
{
    // Cache the Graphics object
    graphics = this.CreateGraphics();
    // Instantiate the Timer
    fTimer = new System.Windows.Forms.Timer();
    // Hook up to the Timer's Tick event
    fTimer.Tick += new System.EventHandler(this.timer1_Tick);
}

In the constructor, we cache the Graphics object from the control's instance and create a new instance of the Timer. And we should not forget to hook into the Timer's Tick event.

We are ready to insert the StartAnimation method that will actually start the animation:

public void StartAnimation(int frWidth, int DelayInterval, int LoopCount)
{
    frameWidth = frWidth;
    // How many times to loop
    loopCount = LoopCount;
    // Reset loop counter
    loopCounter = 0;
    // Calculate the frameCount
    frameCount = bitmap.Width / frameWidth;
    frameHeight = bitmap.Height;
    // Resize the control
    this.Size = new Size(frameWidth, frameHeight);
    // Assign delay interval to the timer
    fTimer.Interval = DelayInterval;
    // Start the timer
    fTimer.Enabled = true;
}

This method accepts a few parameters that are very important for animation: frame width, delay interval and loop count.

And don't forget the looping logic:

private void timer1_Tick(object sender, System.EventArgs e)
{
    if (loopCount == -1) // loop continuously
    {
        this.DrawFrame();
    }
    else
    {
        if (loopCount == loopCounter) // stop the animation
            fTimer.Enabled = false;
        else
            this.DrawFrame();
    }
}

private void DrawFrame()
{
    if (currentFrame < frameCount - 1)
    {
        // Move to the next frame
        currentFrame++;
    }
    else
    {
        // Increment the loopCounter
        loopCounter++;
        currentFrame = 0;
    }
    Draw(currentFrame);
}

In the code above, in the timer1_Tick event, we check the loopCounter, which keeps track of how many loops have already been drawn, and compare it to the loopCount that we captured when the StartAnimation method was called.

We are done with the AnimateCtl and ready to test it in action.
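The timer callback is essentially a small state machine: advance the frame, wrap at frameCount, bump the loop counter on each wrap, and stop once the counter reaches the requested loop count (with -1 meaning loop forever). That bookkeeping can be checked in isolation, independent of any timer or drawing code. The sketch below mirrors the C# logic in Python; note that it also reproduces the original's quirk that frame 0 is only reached after the first wrap, because the frame index is incremented before drawing:

```python
class FrameLooper:
    """Frame/loop bookkeeping mirroring AnimateCtl's timer logic
    (a sketch of the control's state machine, not the .NET control)."""

    def __init__(self, frame_count, loop_count):
        self.frame_count = frame_count
        self.loop_count = loop_count   # -1 means loop forever
        self.loop_counter = 0
        self.current_frame = 0
        self.running = True

    def tick(self):
        """One timer tick; returns the frame to draw, or None once stopped."""
        if self.loop_count != -1 and self.loop_counter == self.loop_count:
            self.running = False       # fTimer.Enabled = false
            return None
        if self.current_frame < self.frame_count - 1:
            self.current_frame += 1    # move to the next frame
        else:
            self.loop_counter += 1     # completed one pass over the strip
            self.current_frame = 0
        return self.current_frame

looper = FrameLooper(frame_count=3, loop_count=2)
frames = []
while looper.running:
    f = looper.tick()
    if f is not None:
        frames.append(f)
print(frames)  # [1, 2, 0, 1, 2, 0]
```

Running it with 3 frames and 2 loops draws exactly six frames and then stops, which is the behavior the article's timer1_Tick/DrawFrame pair implements.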
As a first step, we need to add the image file with the "storyboard" to your project. We can do this by making this file an embedded resource or just by telling Visual Studio .NET 2003 to copy this file as part of the project. Right click on the project in the solution explorer and select "Add Existing Item…" from the pop up menu. Browse to the image file and make sure that the Build Action property for this file is set to Content.

Now let's insert the following into your Form's constructor:

public Form1()
{
    //
    // Required for Windows Form Designer support
    //
    InitializeComponent();

    // Instantiate control
    animCtl = new AnimateCtl();
    // Assign the Bitmap from the image file
    animCtl.Bitmap = new Bitmap(@"\Program Files\AnimateControl\guestbk.gif");
    // Set the location
    animCtl.Location = new Point(50, 50);
    // Add the control to the Form
    this.Controls.Add(animCtl);
}

In the code above, we assign the Bitmap property of the animation control with the Bitmap object that was created from our image file.

Place two buttons on your form in the designer and add the code to their click events:

private void button1_Click(object sender, System.EventArgs e)
{
    animCtl.StartAnimation(92, 100, 3);
}

private void button2_Click(object sender, System.EventArgs e)
{
    animCtl.StopAnimation();
}

When running the project and tapping on the "Start Animation" button, you should see the animation:

Figure 3. The final product

The number of frames incorporated into the pseudo-animated GIF files can vary, as can the delay time between the frames. You would certainly need to adjust the DelayInterval parameter when calling the StartAnimation method for different animations.

This code is by no means in its final version. The AnimateCtl does not provide all the functionality that could be incorporated into animated GIFs. For example, the AnimateCtl control can't handle a different delay time between the frames.
You might want, say, to show the very first frame a little bit longer than the others. The code provided with this article is a good starting point for you to extend this control for your needs.

Please keep in mind that displaying a high-resolution graphics animation could impose a heavy load on system resources. Be aware of the memory and resource constraints of some of the devices you could be running this code on. Don't forget to test it thoroughly and make sure that your application is not hogging all the memory or taking up all the processor time.

Although the .NET Compact Framework is a subset of the full .NET Framework, developers still have the power to create compelling user interfaces which are more attractive to end users. By utilizing available GIF editor tools and the drawing capabilities of the .NET Compact Framework, we are able to display the animation in their Smart Device
http://www.codeproject.com/Articles/5903/Creating-a-Microsoft-NET-Compact-Framework-based-A?fid=32190&df=90&mpp=10&sort=Position&spc=None&tid=2492898
CC-MAIN-2014-15
refinedweb
1,354
56.76
- itext pdf: i am generating pdf using java. i want to do alignment in pdf using java but i found mostly left and right alignment in a row. i want to divide a single row in 4 parts, then how can i do it?
- report generation - EJB: how to create the report in java
- itext pdf - Java Interview Questions: sample program to modify the dimensions of an image file in itext in java. Hi, if you want to know deep knowledge then click here and get more information about itext pdf program.
- report generation: i need to show the report between two dates in jsp. is it possible?
- report generation: hi, i need a code to generate reports in bar graphs and pie charts. i found these codes in roseindia.in; unfortunately it's not working... plz help me
- What is iText in Java?: Hi, What is iText in Java? How to create PDF in Java? Thanks. Hi, check the tutorial: Examples of iText. Thanks
- generating itext pdf from java application - Java Beginners: hi, is there any method in page events of itext to remove page numbers from pdf generated frm.../java/itext/index.shtml
- report generation using jsp: report generation coding using jsp
- pdf generation: i want to generate the data which is stored in mysql database in pdf format with php. how will i do it?
- Open Source PDF: Open Source PDF Libraries in Java. iText is a library that allows you to generate PDF files on the fly...: The look and feel of HTML is browser dependent; with iText and PDF you can
- itext chunk: In this free tutorial we are going to tell you about chunk in iText. iText is a framework for creating pdf files in java. A Chunk
- Adding images in itext pdf: Hi, how to add an image in a pdf file using itext? Thanks. Hi, you can use the following code... image in the pdf file...
- itext version: In this program we are going to find the version of the iText jar file which is used to make a pdf file through the java program. In this example we need
- Create PDF from java report: i want to create a report in a pdf file from my database in mysql. Now i... code to create pdf file from database call from java programming. thank you, Hendra. Create PDF from Java: import java.io.*; import java.sql.
- PDF Generation for unicode characters: Hi, how is a PDF file having unicode characters generated in JSP?
- regarding the pdf table using itext: if the table exceeds the maximum width of the page, how to manage it?
- Generating PDF reports - JSP-Servlet: Hello everyone, I am trying to generate a pdf report using jsp... i want to export values stored in sql server DB on to a pdf file, so please do reply... i have submitted several
- about pdf file handling: can i append something in a pdf file using a java program? if yes then give short code for it. You need the itext api to handle pdf files. You can find the related examples from the given link
- Pdf Viewer: how to display the pdf files in a java panel using... = file.getName(); return filename.endsWith(".pdf"); } public String getDescription() { return "*.pdf"; } } For the above code, you
- Generating Report: how can i generate the report in java by using jdbc records in our application?
- PDF creation in JAVA - JSP-Servlet: visit the following link: HI! Good morning... I want to create a pdf file and i want to write something into the pdf file... before creation. Upto
- Generate pdf file - JSP-Servlet: Hi Friends, how to generate the pdf file for the jsp page or in servlets? Hi Friend, you need to download itext...: For more information, visit the following link:
- convert a pdf file to html in Java: Hi all, how to convert a pdf file to html in Java? Currently all my data is generated into a report in pdf... implementing this, is there any source code in java? Thanks
- Generating report in java: How can I generate a report in java?
- reading from pdf: how can i read specific words from a pdf file? Java Read pdf file: import java.io.*; import java.util.*; import...) { } } } For the above code, you need to download the itext api
- how to display pdf file on browser: // Thanks... In my project i have created one pdf file (by pdfwriter) on my local machine. after that it needs to display
- Using Jasper Report files in java projects using Eclipse IDE: Hello all, I am using iReports for generating reports from a database. I am able to get... requires report generation. I wanted to know the procedure to use these .jasper files
- pdf to text: how to convert a pdf file (which contains tables and text) into a word or excel file using the itext api
- id generation: --%> <%@ page language="java" import="java.util.*" pageEncoding... language="java" import="java.sql.*" pageEncoding="ISO-8859-1"%> <% String... KEY (sno,eid) ) 1) <%--index.jsp--%> <%@ page language="java" import
- pdf Table title: ... and use the pdf file in our program. Now create a file named tableTitlePDF. Remember... to the table of the pdf file. Suppose we have one pdf file in which we have a table and we
- Read PDF file: Java provides the itext api to perform read and write operations with pdf files. Here we are going to read a pdf file. For this, we have used the PDFReader class. The data is first converted into bytes and then with the use
- jasper report - Java Beginners: please tell me how to get a report in jasperreport from a jtable using netbeans
- reading from pdf to java - Java Beginners: How can i read a pdf file into strings in java? I need the methods for reading data from the file and placing that
- Generate unicode malayalam PDF from JSP: ... a malayalam report in PDF format. I have generated a report in jsp. Now I want the same report to be saved as PDF in a unicode malayalam font. I have tried to generate PDF reports using IText, but I don't know how to generate unicode malayalam
- jasper report display in different formats: how to display a jasper report in different formats like csv, pdf using jsp code
- Add text into the pdf File - Development process: Hi friend, how can i insert or add text into a pdf file from the existing one in java.. Thanks in Advance
- ReportMill Studio editor: ReportMill is the best Java application reporting tool available for dynamically generating reports and web pages from Java applications
- barchart generation - Java Beginners: how to take input from a number of classes like order, bill, customer to another class, say report, to make a report on all these things
- Creating PDF in JAVA: How to create a pdf in java? Take value from database for particular PDF
- Struts - Jboss - I-Report: Hi, i am a beginner in Java programming and in my application i wanted to generate a report (based on a database) using Struts, Jboss, I Report
- Time table generation: some questions. I am not good in java, can you please tell me how it will work?
- Access Report to Java Application - Java Beginners: Hello Sir, can I connect an Access Report to a Java Application? When i enter a Student ID in a JTextField then the appropriate records from the database will be shown in the Access Report. plz Help
- online shopping project report with source code in java: Dear Sir/Mam, i want a project in java with source code and report, project name online shopping. thank you
- Random Number Generation - Java Beginners
- Merging multiple PDF files - Framework: I'm using the iText package to merge pdf files. It's working fine but on each corner of the merged file there is some... the files are having different fonts. Please help
- ChapterAutonumber Itext: I'm new to Itext. Please provide some example of using ChapterAutonumber in Itext
- Parameter month I Report - Java Beginners: hy, I want to ask about how to make a parameter in I Report; the parameter is month from date. How to view data from i report... like Java/JSP/Servlet/JSF/Struts etc... Thanks
- create pdf from jsongrid: i need to create a pdf from jsongrid in java struts2.. otherwise i need to create a pdf from a result set
- Reports to pdf conversion: I need to convert reports into pdf format using java.. Will i get any sample code?
- Parameter month I Report - Java Beginners: ok, my problem now is in Report in java with I Report. I want to give a parameter month/year to my design in I Report... to I Report design. Thank's. Hi friend, code to help in solving
- code for image to key generation - JDBC: plz help me, how could i convert an image into a key using the sha1 algorithm in java.... plz give me the code to write it... i am doing image based registration
- PDF to Word Conversion - Java Beginners: Hello, can we convert a PDF document to a Microsoft Word document through Java? If it's not possible in Java, is it possible in any other language?
- Save a pdf with spring as below code: c:/report.pdf All work in the same way, but never save the report to disk. How to solve this problem? please reply me
http://www.roseindia.net/tutorialhelp/comment/97325
CC-MAIN-2015-11
refinedweb
1,728
59.64
per program may be provided for each of the four implicit. For more information see new expression.

Notes

Per name lookup rules, any allocation function declared in class scope hides all global allocation functions. For each allocation function, at most one global replacement may be provided in the entire program, and that one replacement is automatically used by all allocations made through that function in the rest of the program, with no changes to the code.

Even though placement new (overloads 5 and 6) cannot be replaced, a function with the same signature may be defined at class scope and selected by name lookup unless the new expression used ::new. Besides, global overloads that look like placement new but take a non-void pointer type as the second argument are allowed, so code that wants to ensure that the true placement new is called (e.g. std::allocator::construct) must use ::new and cast the pointer to void*.

It is not possible to place an allocation function in a namespace.
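The lookup rule described above, where a class-scope allocation function hides the global ones for that class's objects, has a loose analogue in other languages. In Python, defining __new__ on a class takes over allocation for that class only, while every other class keeps the default. This is purely an illustration of the hiding idea, not of C++ semantics:

```python
class Tracked:
    """Class-level allocation hook, loosely analogous to a class-scope
    operator new shadowing the global allocation function."""
    allocations = 0

    def __new__(cls, *args, **kwargs):
        cls.allocations += 1            # bookkeeping at allocation time
        return super().__new__(cls)     # fall through to default allocation

    def __init__(self, value):
        self.value = value

a = Tracked(1)
b = Tracked(2)
print(Tracked.allocations)  # 2: every Tracked instance went through the hook
```

As with a class-scope operator new, instances of other classes are unaffected; only Tracked's allocations are intercepted.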
http://en.cppreference.com/mwiki/index.php?title=cpp/memory/new/operator_new&oldid=66771
CC-MAIN-2014-52
refinedweb
174
60.75
Yaşar Arabacı wrote: > Author of this post says that we can use mutable variables like this as > static function variables. I was wondering what are mutable variables > and what is rationale behind them. It sounds like you are reading about Python with the mind-set of a C programmer. Python is not C. Python does not have variables in the same sense that C has. Python uses a different assignment model: instead of variables with a fixed type at a fixed memory location, Python's assignment model is that you have objects, and names in namespaces. The distinction is important because you can have names without objects, objects without names, and objects with more than one name. A name without an object is a runtime error, not a compile-time error: >>> spam # A name without an object Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'spam' is not defined But an object without a name is perfectly fine. Anonymous objects are very frequent in Python, so frequent that you've probably already used them without realising. This line creates four objects but only one name: >>> t = [42, 'ham and eggs', 1.5] The name 't' is bound to the list object. The objects 42, 'ham and eggs' and 1.5 (an int, a str and a float) are anonymous -- they have no names. But you can give them names at any time: >>> breakfast = t[1] # bind a name to the object in the list >>> n = len(breakfast) # and pass it to a function or you can continue to use them anonymously: >>> n = len(t[1]) It's not clear what "variable" would mean in Python unless it refers to the combination of a name bound to an object. I often use "variable" in that sense myself, but it's a bad habit, because it can confuse people who have an idea of what a variable is that is different from what Python does. In Python, it is *objects* which are either mutable (changeable) or immutable (fixed), not names. 
All names are mutable in the sense that you can re-bind or delete them: >>> x = 12345 # the name x is bound to the object 12345 >>> x = "two" # and now the name is bound to a different object >>> del x # and now the name x is gone But the objects themselves are inherently either mutable or immutable, regardless of the name. You cannot change the mutability of the object by changing the assignment, only by using a different object. Consider lists and tuples, which are both sequences of objects, but lists are mutable and tuples are not: >>> items = [1, 2, 3] # The list can be changed in place. >>> items[2] = 4 >>> print(items) [1, 2, 4] So "the variable is mutable". But if we re-bind the name to a different object: >>> items = tuple(items) >>> print(items) (1, 2, 4) >>> items[2] = 8 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'tuple' object does not support item assignment "the variable is immutable". Notice that it is the *object* that does not support item assignment. The *name* does not get a say about whether the object is mutable or not: the tuple will always be immutable, no matter what name it is bound to. Most of the common built-in Python objects are immutable: ints floats complex numbers strings (both Unicode and byte strings) tuples bools (True and False) None frozensets while a few are mutable: lists dicts sets Custom types created with the class statement are mutable, unless you take special efforts to make them immutable. For example, the Fraction class is written to be immutable: >>> from fractions import Fraction as F >>> f = F(1, 3) >>> f Fraction(1, 3) >>> f.denominator 3 >>> f.denominator = 5 Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: can't set attribute -- Steven
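The distinction drawn above (objects are mutable or immutable; names merely rebind) can be verified directly with id(), which returns an object's identity:

```python
# Mutating a list in place: the name still refers to the same object.
items = [1, 2, 3]
before = id(items)
items[2] = 4
assert id(items) == before      # same object, changed contents

# "Changing" a tuple is really rebinding the name to a new object.
point = (1, 2, 3)
before = id(point)
point = point[:2] + (4,)        # builds a brand-new tuple
print(point)                    # (1, 2, 4)
print(id(point) == before)      # False: a different object

# Two names can share one mutable object:
t = items
t.append(99)
print(items)                    # [1, 2, 4, 99] -- both names see the change
```

The last case is exactly the "objects with more than one name" situation from the post: mutating through either name is visible through the other, because there is only one object.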
https://mail.python.org/pipermail/tutor/2011-March/082494.html
16 May 2011 12:16 [Source: ICIS news] TOKYO (ICIS)--Japanese refiner Idemitsu Kosan resumed exporting petroleum products earlier this month and plans to cut crude oil refining volume by 13% in April-June from the same period last year, the company said on Monday. Some of Idemitsu’s oil tank facilities were damaged by the massive earthquake which hit northeast Japan in March. “As a result, we were able to restore our supply system almost to the level before the earthquake by the end of March, thus we have decided to restart exports [of petroleum products],” Idemitsu added. From April to June, the company planned to refine a total of 5.6m kilolitres of crude oil at its four refineries, a decrease of 900,000 kilolitres from the year-before period, Idemitsu said. In April alone, the company reduced refining volume by 2% year on year, while it aimed to reduce May refining volume by 34%, Idemitsu said. It plans to cut the volume in June by 2% from the same time a year before, the company added. The company’s
http://www.icis.com/Articles/2011/05/16/9460249/japan-refiner-idemitsu-kosan-resumes-petroleum-products-exports.html
I’ve been writing a whole bunch about MVC architectures – client side and server side. But I hit a problem. You see, MVC and MVVM are pretty simple concepts. Here is a typical diagram that I see when looking at MVC descriptions: It’s nice and simple. The controller loads the model and passes some form of data to the View. The problem is this – where is the user and where is the data? How do these actually interact? This is actually a key point in understanding the architecture and the place that frameworks – any framework – occupies in the architecture. I think the following is a much more representative architectural diagram: This makes more sense to me. The user submits a request to a dispatcher. The dispatcher decides which controller to pass the request to. The controller asks the adapter to give it one or more models to complete the request. In the case of MVVM, these models are transitioned into a View-Model, usually through some sort of data binding. This new model (or view-model) is passed into the View rendering system, which renders the appropriate view and kicks it back to the core dispatcher so that the dispatcher can respond to the user. It’s much more messy than the plain MVC (or MVVM) design pattern. However, it’s implementable and I can see the pieces I need to implement in order to achieve the desired web application. This architecture can be implemented on the client side or the server side and both sides have frameworks that assist. Frameworks provide some sort of functionality that allow you to ignore the boiler-plate code that inevitably comes with writing such an architecture. Most frameworks have some sort of dispatcher (normally called a router, but they do much more than that) and most frameworks have some sort of adapter logic (mostly called an ORM or Object Relational Mapper). In between, frameworks enforce a pattern for controllers, models and views that can be used to enforce consistency across the application. 
On the server side, I have two go-to languages – C# and JavaScript. I use ASP.NET as my framework of choice on the C# server side. I can map my visual directly to ASP.NET:

- ASP.NET provides the dispatcher, with an ability to configure a route map in its startup class.
- The Controller class can be inherited to create custom controllers
- The Model is a plain-old class
- The View is handled by Razor syntax
- The Adapter is generally handled by Entity Framework.

For server-side JavaScript, the mapping is a little more messy:

- Node and ExpressJS provide the dispatcher, with a crude route mapping step
- I’ve recently blogged about my Controller class that can be extended to create custom controllers
- The View is handled by EJS or EmbeddedJS

I haven’t really gotten into Models and Adapters, although I can see libraries such as Mongoose (for MongoDB) playing a part there. However, there are Node/Express MVC frameworks out there – I want to investigate Locomotive and SailsJS at some point, for example. On the client side, things are definitely more messy. There are a host of different frameworks – Angular, Aurelia, Ember, Knockout, Meteor, React / Flux, along with a host of others. I’ve found the TodoMVC site to have a good list of frameworks worth looking at. Some of these are being upgraded to handle ES2015 syntax, some are already there and some are never going to be there. One thing to note about frameworks. They are all opinionated. ASP.NET likes the controllers to be in a Controllers namespace. Angular likes you to use directives. Aurelia likes SystemJS and jspm. Whatever it is, you need to know those opinions and how they will affect things. The web isn’t the only place one can use frameworks. The MVC architecture is not limited to web development – it’s constant across applications of any complexity. For example, you can see MVC in WinForms, Mobile applications, Mac OSX Applications and Linux Applications.
I want my application to be rendered client-side, which means I need to take a look at client-side frameworks. My working list is: I’m not going to bother with Backbone, Meteor, Knockout or any other older or smaller framework. This is my own time and I don’t want to spend a ton of time on investigation. I pretty much know what Aurelia can provide. To investigate the others I needed a small site I could implement – something that wasn’t “just another task organizer” (TodoMVC). To that end, I decided to create a three page application.

- Page 1 – the home page – will get loaded initially and contain a Polymer-based carousel
- Page 2 – a linked page – will load data from the Internet (the Flickr example from the Aurelia app)
- Page 3 – an authenticated page – will load data from the local server if the user is authenticated

In addition to the three pages, I’m going to ensure that the navigation is separated logically from the actual pages and that – where possible – the pages are written in ES2015. I want separation of concerns, so I expect the models to be completely separate from the views and controllers. Each one will be implemented on top of a Node/ExpressJS server that serves up just the stuff needed. In this way, I will be able to see the install specifics. You can see my starter project on my GitHub Repository. I hope you will enjoy this series of blog posts as I cover each framework.
https://shellmonger.com/tag/framework/
A palindrome is a number or a string which is the same as its reverse: it reads the same from right to left as from left to right. We use one of the following three methods:

- Using predefined methods like strrev()
- Comparing the string from start to end
- Palindrome with a number

Method 1: Using predefined methods like strrev()

Logic: In this method, we reverse the string, and compare the reversed string with the original string.

Algorithm:
- Input the string
- Use strrev() to reverse the string
- Next, we compare both strings using strcmp
- Based on the result, we output whether the string is a palindrome

Code (note: strrev(), clrscr() and getch() are non-standard functions available in older compilers such as Turbo C++):

    #include <iostream.h>
    #include <conio.h>
    #include <string.h>

    void main()
    {
        clrscr();
        char str1[30], str2[30];
        cout << "Enter a string : \n";
        cin >> str1;
        strcpy(str2, str1);            // copy str1 into str2
        strrev(str2);                  // reverse str2
        if (strcmp(str1, str2) == 0)   // compare the original and reversed string
            cout << "\n The string is a palindrome.\n";
        else
            cout << "\n The string is not a palindrome.\n";
        getch();
    }

Output:
Enter a string: Hello
The string is not a palindrome.

Method 2: Comparing the string from start to end

Logic: In this method, we compare the string from start to end, character by character.

Algorithm:
- Input the string
- Define i such that i = length of string - 1
- Now run a for loop, where the pointer j has the starting index and i has the ending index
- We compare the characters string[j] and string[i], moving the indices towards each other until they meet
- We set the flag if a pair of characters differs
- In the end, we check the value of the flag and output the result accordingly

Code:

    #include <iostream>
    #include <string>
    using namespace std;

    int main()
    {
        string str;
        cout << "Enter a string: ";
        getline(cin, str);
        int len = str.length();
        int i = len - 1;
        int flag = 0;
        for (int j = 0; j < i; j++, i--)
        {
            if (str[i] != str[j])
            {
                flag = 1;
                cout << "\n Entered string is not a Palindrome.\n";
                break;
            }
        }
        if (flag == 0)
            cout << "\n Entered string is a Palindrome.\n";
        return 0;
    }

Output:
Enter a string: AABBBBAA
Entered string is a Palindrome.
Method 3: Palindrome with a number

This method is already given in one of the previous examples.
https://www.studymite.com/cpp/examples/palindrome-program-in-cpp/?utm_source=related_posts&utm_medium=related_posts
* Peter Otten <__peter__ at web.de> [101211 03:41]:
> (1) the method is spelt __getitem__ (two leading and two trailing
> underscores)
>
> (2) the left side is a python string with legal "%(...)s"-style format
> expressions. Given a format string
>
> try to feed it a real dict
>
> print s % {"s.upper()":"OK") # should print OK
>
> to verify that that precondition is met.

Should be
print s % {"s.upper()":"OK"} ## closing brace

I've never had the occasion to use assert() or any other python trouble-shooting tools, any thoughts on that?

I am dealing with a programmatically composed format string that originates from a source (html) file. It may be

1) read from the file external to the object and the source string passed into the object at instantiation. The composed string is then translated correctly and the embedded code is evaluated.

2) read from the source file 'within' the object namespace. In this case, the embedded code within the composed format is *not* evaluated. And <blush> I can observe the legal "%(...)s"-style format expressions.

3) *But* (and here is the kicker), if the composed format string is then dumped to a pickle file and then loaded from that pickle file, it is then evaluated. :) As near as I can see. And that suggests a workaround.

Unfortunately, no error messages are generated either at runtime or by pychecker.

Thanks Peter
--
Tim
tim at johnsons-web.com or akwebsoft.com
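The thread revolves around "%(...)s"-style formatting against a dict. A minimal runnable illustration (mine, not from the thread) of the point Peter is making: the text between the parentheses is a plain dictionary key, looked up in the mapping on the right-hand side — it is never evaluated as code, even when it happens to look like an expression such as `s.upper()`:

```python
# The "%(key)s" placeholder looks the key up in the mapping; the key is a
# literal string, not an expression to be evaluated.
s = "result: %(s.upper())s"
print(s % {"s.upper()": "OK"})   # the key just happens to look like code

# More typical usage with ordinary keys:
template = "%(name)s is %(age)d years old"
print(template % {"name": "Tim", "age": 42})
```

This is why the format string can be composed programmatically and stored, pickled and reloaded without the keys changing meaning: only the lookup into the supplied dict ever happens.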
https://mail.python.org/pipermail/tutor/2010-December/080588.html
One of the most exciting features of .NET Core 3.0 and C# 8.0 has been the addition of IAsyncEnumerable<T> (aka async streams). But what's so special about it? What can we do now that wasn't possible before? In this article, we'll look at what challenges IAsyncEnumerable<T> is intended to solve, how to implement it in our own applications, and why IAsyncEnumerable<T> will replace Task<IEnumerable<T>> in many situations.

Life before IAsyncEnumerable<T>

Perhaps the best way to illustrate why IAsyncEnumerable<T> is useful is to take a look at what challenges exist without it. Imagine we're building a data access library, and we need a method that queries a data store or API for some data. It's pretty common for that method to return Task<IEnumerable<T>>, like this:

    public async Task<IEnumerable<Product>> GetAllProducts()

To implement the method, we typically perform some data access asynchronously, then return all the data when it's finished. The problem with this becomes more evident when we need to make multiple asynchronous calls to obtain the data. For example, our database or API could be returning data in pages, like this implementation that uses Azure Cosmos DB:

    public async Task<IEnumerable<Product>> GetAllProducts()
    {
        Container container = cosmosClient.GetContainer(DatabaseId, ContainerId);
        var iterator = container.GetItemQueryIterator<Product>("SELECT * FROM c");
        var products = new List<Product>();
        while (iterator.HasMoreResults)
        {
            foreach (var product in await iterator.ReadNextAsync())
            {
                products.Add(product);
            }
        }
        return products;
    }

Notice we are paging through all the results in a while loop, instantiating all the product objects, placing them into a List<Product>, and finally we return the whole thing. This is quite inefficient, especially for larger datasets.

Maybe we can create a more efficient implementation by changing our method to return results one page at a time:

    public IEnumerable<Task<IEnumerable<Product>>> GetAllProducts()
    {
        Container container = cosmosClient.GetContainer(DatabaseId, ContainerId);
        var iterator = container.GetItemQueryIterator<Product>("SELECT * FROM c");
        while (iterator.HasMoreResults)
        {
            yield return iterator.ReadNextAsync().ContinueWith(t =>
            {
                return (IEnumerable<Product>)t.Result;
            });
        }
    }

The caller would consume the method like this:

    foreach (var productsTask in productsRepository.GetAllProducts())
    {
        foreach (var product in await productsTask)
        {
            Console.WriteLine(product.Name);
        }
    }

This implementation is more efficient, but the method now returns IEnumerable<Task<IEnumerable<Product>>>. As we can see in the calling code, it's not intuitive to understand how to invoke the method and process the data. More importantly, paging is an implementation detail of the data access method that the caller should know nothing about.

IAsyncEnumerable<T> to the rescue

What we really want to do is to retrieve data asynchronously from our database and stream results back to the caller as they become available. In synchronous code, a method that returns IEnumerable<T> can use the yield return statement to return each piece of data to the caller as it is returned from the database.

    public IEnumerable<Product> GetAllProducts()
    {
        Container container = cosmosClient.GetContainer(DatabaseId, ContainerId);
        var iterator = container.GetItemQueryIterator<Product>("SELECT * FROM c");
        while (iterator.HasMoreResults)
        {
            foreach (var product in iterator.ReadNextAsync().Result)
            {
                yield return product;
            }
        }
    }

However, DO NOT DO THIS! The above code turns the async database call into a blocking call and will not scale. If only we could use yield return with asynchronous methods! That hasn't been possible... until now. IAsyncEnumerable<T> was introduced in .NET Core 3 (.NET Standard 2.1).

It exposes an enumerator that has a MoveNextAsync() method that can be awaited. This means the producer can make asynchronous calls in between yielding results. Instead of returning a Task<IEnumerable<T>>, our method can now return IAsyncEnumerable<T> and use yield return to emit data.

    public async IAsyncEnumerable<Product> GetAllProducts()
    {
        Container container = cosmosClient.GetContainer(DatabaseId, ContainerId);
        var iterator = container.GetItemQueryIterator<Product>("SELECT * FROM c");
        while (iterator.HasMoreResults)
        {
            foreach (var product in await iterator.ReadNextAsync())
            {
                yield return product;
            }
        }
    }

To consume the results, we need to use the new await foreach() syntax available in C# 8:

    await foreach (var product in productsRepository.GetAllProducts())
    {
        Console.WriteLine(product);
    }

This is much nicer. The method produces data as they are available. The calling code consumes the data at its own pace.

IAsyncEnumerable<T> and ASP.NET Core

Starting with .NET Core 3 Preview 7, ASP.NET is able to return IAsyncEnumerable<T> from an API controller action. That means we can return our method's results directly -- effectively streaming data from the database to the HTTP response.

    [HttpGet]
    public IAsyncEnumerable<Product> Get() => productsRepository.GetAllProducts();

Replacing Task<IEnumerable<T>> with IAsyncEnumerable<T>

As time goes by and the adoption of .NET Core 3 and .NET Standard 2.1 grows, expect to see IAsyncEnumerable<T> used in places where we've typically used Task<IEnumerable<T>>. I look forward to seeing libraries support IAsyncEnumerable<T>.
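As an aside not found in the article: the async-stream idea is not unique to C#. Python has had the same construct — async generators consumed with `async for` — since 3.6, and a small sketch of the same "await between yields" pattern may help readers coming from Python (the paging here is simulated, not a real database call):

```python
import asyncio

async def get_all_products():
    # An async generator may await between yields, so each "page" is fetched
    # asynchronously and its items are streamed to the consumer as they arrive.
    for page in range(3):                        # pretend the store has 3 pages
        await asyncio.sleep(0)                   # stand-in for an async DB/API call
        for i in range(2):                       # two items per page
            yield f"product-{page}-{i}"          # emit items as the page arrives

async def main():
    results = []
    async for product in get_all_products():     # analogous to C#'s `await foreach`
        results.append(product)
    return results

print(asyncio.run(main()))
```

As in the C# version, the producer's paging stays an implementation detail, and the consumer just iterates.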
Throughout this article, we've seen code like this for querying data using the Azure Cosmos DB 3.0 SDK:

    var iterator = container.GetItemQueryIterator<Product>("SELECT * FROM c");
    while (iterator.HasMoreResults)
    {
        foreach (var product in await iterator.ReadNextAsync())
        {
            Console.WriteLine(product.Name);
        }
    }

Like our earlier examples, Cosmos DB's own SDK also leaks its paging implementation detail and that makes it awkward to process query results. To see what it could look like if GetItemQueryIterator<Product>() returned IAsyncEnumerable<T> instead, we can create an extension method on FeedIterator:

    public static class FeedIteratorExtensions
    {
        public static async IAsyncEnumerable<T> ToAsyncEnumerable<T>(this FeedIterator<T> iterator)
        {
            while (iterator.HasMoreResults)
            {
                foreach (var item in await iterator.ReadNextAsync())
                {
                    yield return item;
                }
            }
        }
    }

Now we can process our query results in a much cleaner way:

    var products = container
        .GetItemQueryIterator<Product>("SELECT * FROM c")
        .ToAsyncEnumerable();

    await foreach (var product in products)
    {
        Console.WriteLine(product.Name);
    }

Summary

IAsyncEnumerable<T> is a welcomed addition to .NET and will make for much cleaner and more efficient code in many cases. Learn more about it with these resources:

Discussion (18)

This is cool. Can you write something about SignalR? Thank you.

Sure. Anything in particular that you want to read about?

I would love to read about a current state of SignalR / SSE (Server Sent Events) / gRPC / anything else? - features, gaps, areas to apply/consider using for. What is implemented / supported in .NET Core 3 for "realtime" like apps.

The use of IAsyncEnumerable in a SignalR application. Especially regarding clients communicating to each other. Thank you

It's great. I love SignalR!!

I would love to read about SignalR

Is there any way that u can show with stopwatch or with memory consumption matching, showing old vs new way is much efficient?

I don't think it will be more efficient in that sense - it blocks fewer threads so your server can scale better (await more IO-bound operations like the DB calls here). If you use this for algorithms (CPU-bound operations) the runtime will probably be worse (there surely is even more overhead than the one produced by async/await's state machines).

Do you know how does it work under the hood? I mean, in your example you pass query like select * from...but, since it's lazy loaded, what's the actual query executed? How I see it is it retrieves records one by one, which is something similar to looping through IQueryable. Am I wrong? If so, how does it work exactly?

I don't know exactly how this works with other DB servers but MSSQL uses a so-called TDS (Tabular Data Stream) protocol to deliver results, which is a stream of data rows. This allows asynchronous stream processing within .NET Core applications leveraging IAsyncEnumerable<>.

Can you provide an F# example? I've been looking for a way to return an asyncSeq.

That's awesome! Thanks for sharing!

I'm new to this whole progg. stuff but I know there isn't no one out there better than Microsoft. Go .NET !!! hope I get to use this new feature one day again GO GO .NET

Nice article, thanks. A thing I wonder when you are using IAsyncEnumerable in a controller as the return type of a route: how do you handle the case where you want to return a NotFoundResult when no items are in the IAsyncEnumerable?

Does the new interface support on-demand pull? I.e. what happens if the table contains 100 records but the client uses Take(15)?

The MoveNextAsync method is called 15 times. Whether or not it does something is an implementation detail.

This article very nice. Useful and easy to understand! Thank Chu!

That's great! It's very helpful. Thanks for sharing.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/dotnet/what-s-the-big-deal-with-iasyncenumerable-t-in-net-core-3-1eii
(2012-27 20:30)fmronan Wrote: with 0.6.3, the script runs and scans but after that everything is empty. Same with 0.6.4: no pictures, just one movie .avi, strange.

    def DB_cleanup_keywords():
        #cn.execute( "delete from TagsInFiles where idFile not in (select idFile from Files)")
        #cn.execute( "delete from Tags where idTag not in (select idTag from TagsInFiles)")

(2012-03-28 01:32)TPXBMC Wrote: Hi, Does anyone know if there is a way to do an advanced search on the pictures? I haven't found a way yet. For example: pictures with Tag: Vacation and Tag: Mexico; pictures with Tag: Kids Name, Tag: Birthday, date range 2005-2008. So far the global search does not seem to allow multiple phrases but perhaps I don't know the proper syntax. Even a view-by-tag within view-by-tag function would be good. Thanks.

(2012-03-26 22:16)Xycl Wrote: Suggestion for further development: ..... 3) An advanced filter. Perhaps a wizard. Select pictures from country abc with keyword xyz containing person def. Needs 1) otherwise it's not maintainable. Any opinions?

(2012-03-28 11:16)fmronan Wrote: I can't add the 0.6.5 because the repo said problem with strings

(2012-03-28 12:08)fungify Wrote: Adding a folder to exclude leads to GUI lockup for many seconds. When the GUI responds again the chosen folder is not excluded. Repeating the process does not lead to lockup, and the folder is excluded. I've tried deleting the image database, enabling debug, restarting XBMC and adding a folder and two folders to exclude. Pastebin here:

(2012-03-28 12:08)fungify Wrote: I've noticed problems with large numbers of photos as well. I've added several thousand photos, most of which are tagged with person and keywords. When selecting a person with a large number of images (>800) the addon throws an error. This does not happen when there is a smaller number of images attached to the person. Will post pastebin as soon as I've completed a rescan of the images.

(2012-03-28 14:18)Xycl Wrote: I have the same problems.
When selecting a directory, either to include or exclude, sometimes a progress dialog appears. Then you can be sure that it doesn't work. Perhaps someone with more Python knowledge than me can have a look at show_roots() in default.py. Currently I'm not able to find the reason for this progress dialog. (2012-03-28 14:18)Xycl Wrote: Alexsolex added a paging to the plugin. Annoyingly, this seems to be not fully implemented. You can set the max number of displayed pictures in the preference dialog of mypicdb to 100.000. Perhaps this will help. (2012-03-28 20:33)fungify Wrote: On a related note - I would like to help with the Danish translations, but I can't seem to find the strings to translate? (2012-03-28 20:33)fungify Wrote: EDIT: Can anyone confirm that adding excluded folders before scanning dramatically increases scanning time? As is - hours suddenly. Working on the fourth hour to scan my 7-8 thousand pictures for the first time - each picture taking 5-10 seconds to add to the database.
http://forum.xbmc.org/showthread.php?pid=1057991
    #include <cafe/mic.h>

    int MICUninit(mic_handle_t h_mic);

A return value of MIC_ERROR_NONE (0) indicates success and any other value indicates an error. MIC_ERROR_INV_ARG is returned if an invalid handle is passed in. MIC_ERROR_NOT_INIT is returned when MICUninit is called before MICInit.

This function is the counterpart to MICInit. It will unconditionally shut down the DRC microphone driver stack. The functionality of MICClose is performed, if necessary, before uninitializing the driver stack. After being uninitialized, the ring buffer will no longer be referenced by the DRC microphone library and the application can safely free or reuse that memory.

The processing performed in MICUninit is synchronous in the sense that this function will not return until the DRC microphone driver stack is shut down and quiescent. This can take tens of milliseconds, although much of that time is spent waiting on synchronization objects. This function is not meant to be called from a timing-sensitive thread.

See also: MICInit, mic_handle_t, Error Codes

2013/05/08 Automated cleanup pass.
2012/06/08 Initial version.

CONFIDENTIAL
http://anus.trade/wiiu/personalshit/wiiusdkdocs/fuckyoudontguessmylinks/actuallykillyourself/AA3395599559ASDLG/av/mic/mic_uninit.html
Nash

Nash (or Nash shell) is a minimalist yet powerful shell with a focus on readability and security of scripts. It is inspired by the Plan 9 rc shell and brings to Linux a similar approach to namespaces(7) creation. There is a nashfmt program to correctly format nash scripts in a readable manner, much like the Golang gofmt program.

Installation

Install the nash-gitAUR package.

Configuration

Make sure that nash has been successfully installed by issuing the command below in your current shell:

    $ nash
    λ>

If it returned a lambda prompt, then everything is fine. When first executed, nash will create a ~/.nash/ directory in the user's home path. Enter the command below to discover for yourself what this directory is:

    λ> echo $NASHPATH
    /home/username/.nash

Put a file called init inside this directory to configure it. Nash has only one special variable: the PROMPT variable stores the unicode string used for the shell prompt.

Nash's default cd is a very simple alias to the builtin function chdir; you may find it odd to use. To improve your usage you can create your own cd alias. In nash you cannot create aliases by matching strings to strings, but only by binding a function to a command name. The init below creates a cd alias as an example:

    defPROMPT = "λ> "

    fn cd(path) {
        if $path == "" {
            path = $HOME
        }
        chdir($path)
        PROMPT = "("+$path+")"+$defPROMPT
        setenv PROMPT
    }

    # bind the "cd" function to "cd" command name
    bindfn cd cd

After saving the init file, simply start a new shell and now you can use cd as if it were a builtin keyword.

    git:(master)λ> nash
    λ> cd
    (/home/i4k)λ> cd /usr/local
    (/usr/local)λ>

For a more elaborate cd or other alias implementations, see the project dotnash.

Organizing the init

Nash scripts can be modular, but there is no concept of a package. You can use the import keyword to load other files inside the current script session. For an example, see the dotnash init.

Configuring $PATH

Inside the init put the code below (edit for your needs):

    path = (
        "/bin"
        "/usr/bin"
        "/usr/local/bin"
        $HOME+"/bin"
    )

    PATH = ""

    for p in $path {
        PATH = $PATH+":"+$p
    }

    setenv PATH

Making nash your default shell

See Command-line shell#Changing your default shell.

Usage

Keybindings

The cli supports emacs and vi modes for common buffer editing. Default mode is emacs and you can change it by issuing:

    λ> set mode vi
https://wiki.archlinux.org/index.php?title=Nash&printable=yes
By default search results are sorted according to score (relevance). In some scenarios we want to boost the score of hits based on certain criteria. One way to do that is by using the BoostMatching method. The method has two parameters, a filter expression and a double. The filter expression can be any expression that returns a filter, meaning that we can pass it the same types of expressions that we can pass to the Filter method. The second parameter is the boost factor. If we pass it 2, a hit that matches the filter and otherwise would have had a score of 0.2 will instead have a score of 0.4, and will thereby be returned before a hit that has a score of 0.3 but does not match the filter.

Examples

Using the below code we increase the probability that a blog post about the fruit banana is sorted before a blog post about the brand "Banana Republic".

    searchResult = client.Search<BlogPost>()
        .For("Banana")
        .BoostMatching(x => x.Category.Match("fruit"), 2)
        .GetResult();

The BoostMatching method can only be called when doing a search (as in, we are not just finding all documents of a certain type or just using the Filter method) and has to be called before any method that is not related to the search query (such as Filter, Take, Skip etc). This is enforced by the fact that the For method in the above case returns an IQueriedSearch object while for instance the Filter method does not, so if it compiles it should work.

The method can be called multiple times. If a hit matches several filters the boost will be accumulated. Note however that while calling the method five or ten times is fine, applying a huge number of boosts is not a good idea. For instance, we may use it to boost recently published blog posts by giving those that are published today a significant boost and those published in the last 30 days a slight boost. But adding a different boost for the last 365 days will at best result in a very slow query and at worst result in an exception.

Imagine we are developing a site for a car dealer and have indexed instances of the below Car class.

    public class Car
    {
        public string Make { get; set; }
        public string Model { get; set; }
        public string Description { get; set; }
        public double SalesMargin { get; set; }
        public bool InStock { get; set; }
        public bool NewModelComingSoon { get; set; }
    }

When a visitor performs a search we will of course want to limit the results to those that match the search query and order them according to how relevant they are for the query. But we may also want to tweak the sorting a little to optimize for the car dealer's business conditions. If a certain model is in stock they are able to deliver it to the customer, and receive payment, faster, so we might boost hits where that is true a little.

    var searchResult = client.Search<Car>()
        .For("Volvo")
        .BoostMatching(x => x.InStock.Match(true), 1.5)
        .GetResult();

Also, if the dealer has a very high margin for a certain model we could boost that a little as well.

    var searchResult = client.Search<Car>()
        .For("Volvo")
        .BoostMatching(x => x.InStock.Match(true), 1.5)
        .BoostMatching(x => x.SalesMargin.GreaterThan(0.2), 1.5)
        .GetResult();

Finally, if a model is about to be replaced by a newer model and the dealer has one in stock, it might be very valuable to sell it before the new model comes out and its value is decreased. So we give hits that match that criteria a significant boost.

In these examples we are adapting the scoring, or the sorting, of search results to optimize for the business. In other situations we may instead want to optimize for the user. On a site with recipes we may for instance want to boost hits for recipes depending on what we know about a logged-in user's allergies, previous searches or previously printed recipes.

Possibilities

The BoostMatching method is not limited to text searches, meaning that we can also use it after the MoreLike method.
And considering the vast number of filtering options supported, we can also use it for instance to combine it with the geographical filtering methods to boost hits close to the user.
https://world.episerver.com/documentation/Items/Developers-Guide/EPiServer-Find/8/DotNET-Client-API/Searching/Boosting-with-filters/
>> also have heard of the RasPiO GPIO Ruler that I Kickstarted recently. Now There’s A New Front-end Called GPIO Zero Last time I visited Pi Towers, a couple of months ago, Ben Nuttall showed me some embryonic code he’d started working on to make RPi.GPIO more user-friendly for end-users. Last week he took it to Beta release and started gently letting the world know about it. It’s called GPIO Zero. It looks like an interesting development and I wanted to blog about it, so yesterday I asked him some questions about GPIO Zero. Here are the answers (Ben’s words in italics)… Why did you create GPIO Zero? Ben N: I started the project to provide simple interfaces to everyday components like LEDs and buttons, to make playing around with bits & pieces from the CamJam Edukit much more accessible. I would open up a Python shell and come up with little examples and it proved to be very fast for prototyping ideas and seeing what works – just a few lines to get going and it’s easy to take it in any direction from there. It’s only been a few weeks and it’s already become a very usable library. Who is it aimed at? Ben N: I’ve designed it with education in mind, to help teachers and kids get going with physical computing without the friction of worrying about pull ups, edges and all the setup. But I think most people will find it very handy. Does it supersede RPi.GPIO? Ben N: It will probably supersede RPi.GPIO for general use, but GPIO Zero is entirely built on RPi.GPIO so that’s not going away! When I showed Ben Croston (the author of RPi.GPIO) an early draft he said it was great – and that his library was never written to be an end-user library, just to bridge the gap providing Python access to the GPIO pins. People will be able to do most general purpose stuff with GPIO Zero, and if a component isn’t provided, they can use the underlying general InputDevice and OutputDevice classes, or they can just use RPi.GPIO. 
I’m sure many people will continue to use RPi.GPIO purely because that’s what all the tutorials on the web use. Hopefully GPIO Zero will catch on and people will see how much neater their code can be! I must say I’m eternally grateful to Ben for maintaining it! Not to mention all the projects I’ve used it for the last three years.

Why did you choose BCM over any other pin numbering system?

Ben N: We made a decision in the Foundation last year that we would always use BCM numbering for our resources. BOARD numbering might seem simpler but I’d say it leads new users to think all the pins are general purpose – and they’re not. Connect an LED to pin 11, why not connect some more to pins 1, 2, 3 and 4? Well 1 is 3V3. 2 and 4 are 5V. A lack of awareness of what the purpose of the pins is can be dangerous. Also, counting pins is a nightmare whichever way you’re counting them – you’re better off using a port label like your RasPiO Portsplus

I completely agree with Ben about that. I detest counting pins and think it is an exercise best avoided altogether by having nice clear labels. That’s why I designed the above RasPiO Portsplus board.

Will all official Raspberry Pi resources use GPIO Zero in future or is it mainly for the most basic stuff?

Ben N: I imagine so. I’ve drafted up the code for some of the resources to see what they would look like – and I think they become much neater and make it easier to teach the concepts without getting bogged down with the code.

Some people might argue that you are abstracting away too much or making it too easy. How would you respond to that?

Ben N: I run a session for teachers at Picademy and in order to get them to make a button press control the camera module, I have to teach the concept of pull-ups. Then they have to teach that to their students. It’s not intended to be an electronics lesson, and it distracts from the purpose of the code, and the objective of the exercise.
You might argue that it’s good to know about pull-ups and pull-downs, and you’d be right – but why do I have to teach that on day one? I’ve got a button, and I’ve connected it to ground and pin 17 – now let’s make it take a picture. If you want to teach the electronics in more depth there’s plenty of scope for that – but it shouldn’t be mandatory if you’re just getting started.

GPIO Zero is a Pythonic API, it’s a nice clean interface that allows you to get stuff done. That’s what high-level languages and frameworks are designed to provide. Python code should be easy to write and easy to read. The API should be so simple that once you know how to do one thing, you can probably guess how to do another. It’s good to be able to say what you want code to do, rather than how to do it.

Is there anything else you’d like to tell the world about GPIO Zero?

Ben N: Expect a full release within a month. It’ll be pre-installed in Raspbian from then on. Until then, install the beta with pip. Check out the recipes for ideas and you’ll get the gist of how it works. If there’s anything you think should be added, just let me know on GitHub or the Google Doc. A huge thanks goes to Dave Jones (author of picamera and picraft) for his voluntary help on the project. He’s responsible for some of the cleverest aspects of the library.

Thank you Ben for answering those questions.

Where Can I Get It?

The current Beta release and installation instructions are found here.

So What’s it Like?

Well for starters it’s simpler and more “natural language” in its approach. Let’s do a side-by-side comparison of the code needed to light an LED on GPIO25 and switch it off after 5 seconds.
(The two versions are alternatives – don’t combine them in one script)…

# RPi.GPIO
from RPi import GPIO
from time import sleep

GPIO.setmode(GPIO.BCM)
GPIO.setup(25, GPIO.OUT)
GPIO.output(25,1)
sleep(5)
GPIO.output(25,0)

# GPIO Zero
from gpiozero import LED
from time import sleep

led = LED(25)
led.on()
sleep(5)
led.off()

GPIO Zero removes the need to set up the GPIO ports manually. All you do is tell the program what you want to do and it handles that part for you. It also sets warnings off (a feature I never actually use, but it’s an extra line of code you had to type if you did).

Natural Sounding Language

But Ben’s main point in developing GPIO Zero is to create a Python GPIO interface that can be addressed with natural-sounding language, which makes it very approachable for learning. Rather than having to get your head around…

GPIO.setup(25, GPIO.OUT)

“I want to set up GPIO25 as an output”, you have…

led = LED(25)

“I want to put an LED on GPIO25”, which makes more sense to a human being. Rather than…

GPIO.output(25,1)

“Switch our output on GPIO25 to On”, we have…

led.on()

“Switch on our LED”

There is even a function for blink(), so you could blink an LED in 3 lines of code, which is ideal to grab the interest of a ‘small person’ within the limited time of their attention span.

from gpiozero import LED
led = LED(25)
led.blink()

In my teaching experience, if you can ‘get a result’ within a few minutes, you’ve got them hooked and interested. If it takes too long, you’re doomed!

More To Come

I haven’t yet had an in-depth play with GPIO Zero, but will now do so and report my findings in my next blog post. When I learned to drive, I happily drove round in a manual (stick shift) car for 20 years before I bought my first automatic. I took to an automatic very quickly and love it and I don’t think I would want to go back to manual. I suspect it may be a bit like this with GPIO Zero. There’s nothing wrong with RPi.GPIO – I’ve learned how to use it, it works well and I love it.
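The idea behind GPIO Zero – a small friendly class wrapping a lower-level pin API – can be sketched in plain Python. This is an illustrative sketch only, not GPIO Zero’s actual implementation: the FakePin class here is a hypothetical stand-in for RPi.GPIO, so the example runs without Raspberry Pi hardware.

```python
# Sketch of the gpiozero wrapper idea. FakePin is a hypothetical
# stand-in for a real GPIO backend such as RPi.GPIO.

class FakePin:
    """Pretends to be a GPIO pin; just records its output state."""
    def __init__(self, number):
        self.number = number
        self.state = 0

    def write(self, value):
        self.state = value


class SimpleLED:
    """gpiozero-style wrapper: all pin setup happens in the constructor."""
    def __init__(self, pin_number, pin_factory=FakePin):
        # Replaces the explicit GPIO.setmode()/GPIO.setup() boilerplate.
        self._pin = pin_factory(pin_number)

    def on(self):
        self._pin.write(1)

    def off(self):
        self._pin.write(0)

    @property
    def is_lit(self):
        return self._pin.state == 1


led = SimpleLED(25)
led.on()
print(led.is_lit)   # True
led.off()
print(led.is_lit)   # False
```

The point of the sketch is the shape of the API, not the backend: the caller says what it wants (`led.on()`) and never touches pin modes or output registers.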
But if you don’t need to ‘get your hands dirty under the hood’, GPIO Zero provides a nice clean way of getting the job done in a manner that is a lot easier to understand and quicker for learners. It’s another option, and it’s good to have choices.

Hi Folks, thanks for the introduction of GPIO Zero. I got a short question too: Will I be able to use it for PWM, like for servos or even ws2812b RGB LEDs?

Very doubtful, because it will be using RPi.GPIO’s soft PWM, which is not accurate enough for those applications. You really need hardware PWM for both of those. ServoBlaster (C library) can be made to work on the Pi, but it is a bit jittery. Hardware PWM is the way to go though. I think Adafruit do a nice Servo Hat which you can run from i2c.

Hi Alex, I agree with your opinion to a certain degree. Since GPIO Zero is meant to be used for beginners and educational purposes, we should try to give soft PWM a chance. I successfully cloned and adapted this project [] for simple servo control, e.g. PI.CAM orientation – and it works fine (even with the jittery side effect). I also want to build a bigger project with more servos, so the Adafruit servo hat is already ordered :-) Another idea (I gladly like to add to the GitHub doc wish list) is to drive a stepper motor. And one last question about GPIO Zero (for this comment): is the led.blink() function working in the background once set? That would be some kind of multitasking in Python, and I wonder if you would need to clean up the GPIO on exit. Cheers McPeppr (mcpeppr.wordpress.com)

I don’t know if something has changed since April 2013, perhaps it has? But at that time I tried RPi.GPIO soft PWM as a means of controlling servos and it simply couldn’t do it. When I put the oscilloscope on the signal, it wasn’t very consistent. As far as I understand it, servos need quite a consistent signal. That was using RPi.GPIO, which GPIO Zero is based on.
So, unless something was improved that I don’t yet know about (possible), I’d still say no to servo control with soft PWM in Python. (Not as a matter of authority – that’s up to Ben – I merely think it won’t work). I think blink is a background thread, but Ben will be able to answer that one better than me. I’m pretty sure GPIO.cleanup is taken care of for you too.

For soft PWM the pigpio library is the best. I tried RPi.GPIO, WiringPi and pigpio. The last one wins. I hope GPIO Zero is flexible enough to (in future) support different backend libraries.

I have built in support for LEDs and RGBLEDs – and could potentially add it for further components such as servos and ws2812s. I’ll add them to the wishlist for future updates! If you have any further suggestions please post to GitHub or the Google Doc! (links above)

I’ve always preferred the RPIO library over all the others. It does PWM via DMA so it’s pretty fast/accurate. I’ve used it to control an 8-servo walking robot. It also includes simple functions for interrupts on GPIO and TCP sockets, as well as a CLI for manipulating GPIO outside of Python.

RPIO is not a maintained project and in its official release it doesn’t support Raspberry Pi 2. I would not recommend it to anyone in its current state.

By LED… could you also mean a 3.3V or 5V relay?!?! And which?!?!

You could. And you could also use a general-purpose output for that. I don’t think there is a relay class in GPIO Zero (yet), but there probably doesn’t need to be. I’ve found 5V relays work OK on a Pi with 3V3 logic and 5V power, but usually the little boards you buy for Arduino etc. are 0=On, 1=Off, which can be confusing until you realise that.
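That 0=On, 1=Off confusion is just inverted (active-low) logic, and it is exactly the kind of detail a GPIO Zero-style wrapper can hide. A hedged sketch of the idea – PinBackend is a hypothetical stand-in for a real GPIO library, not any actual API:

```python
# Hiding active-low relay logic behind a friendly wrapper.
# PinBackend is a hypothetical stand-in for a real GPIO backend.

class PinBackend:
    def __init__(self, number):
        self.number = number
        self.level = 1  # many relay boards idle high

    def write(self, level):
        self.level = level


class Relay:
    """on()/off() in human terms; the 0=On inversion is handled inside."""
    def __init__(self, pin_number, active_low=True):
        self._pin = PinBackend(pin_number)
        self._active_low = active_low
        self.off()  # drive the pin to a known "relay off" state

    def on(self):
        self._pin.write(0 if self._active_low else 1)

    def off(self):
        self._pin.write(1 if self._active_low else 0)

    @property
    def is_on(self):
        return self._pin.level == (0 if self._active_low else 1)


relay = Relay(17)    # a typical active-low board
relay.on()
print(relay.is_on)   # True, even though the pin level is actually 0
```

With this in place the caller never has to remember which level means "energised" for their particular relay board.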
https://raspi.tv/2015/gpio-zero-introduction?replytocom=59637
Introduction
Pre-Requisites
What is ASP.NET MVC?
Why ASP.NET MVC?
General Details
Is ASP.NET MVC the Only Option?
Building the DMS using ASP.NET MVC
Design Considerations
How Do You Architect a System?
Main Design of the DMS
Building the Code Framework for our DMS
Let’s Look at Our Data Access Layer Design
Let’s Look at Our Business Logic Layer Design
Additional Parts of the DMS Design
Validating Business Objects
Let’s Look at Our Presentation Layer (User Interface) Design
Dependency Injection
Scalability Options of the Design
Summary of the DMS Design
Unit Testing
Other Things to be Noted
Conclusion
History

The Model View Controller (MVC) is an architectural pattern used in software engineering. The pattern isolates "domain logic" (the application logic for the user) from input and presentation (GUI), permitting independent development, testing and maintenance of each. Today, the Microsoft variant of MVC, called ASP.NET MVC, which ships alongside .NET Framework 4, is gaining momentum among software designers. In addition, enthusiasm is high among developers to support the community with materials that speed up application development with the ASP.NET MVC framework. There are many source libraries on the internet where such contributions can be found. These free off-the-shelf libraries/components have positively influenced the modern software industry in many different ways. However, as with any other case, this has a few negatives too. Today, unfortunately, junior designers who are trying to do their first system in MVC are struggling to select the right set of off-the-shelf components. They produce needlessly complex designs by committing themselves to using these off-the-shelf components. One needs to understand that you don't have to use everything in every (or the first) system you design. You need to use only the suited ones.
But finding the suited options when many different options are made available is easier said than done. The stated problem is a little more complicated than it first appears, as the technology providers too have competing products. Therefore, to start off, let me give a couple of such examples where you see different means made available to achieve the same goal. Among the items listed above, you can see there are similar or more applicable options. Additionally, there are other options which have to be selected based on your requirement. These options have to be selected considering the functional and non-functional requirements of your system. Once well designed, the system truly adds value to your development process. A designer with experience selects his options the right way the first time, and ends up with significant advantages. If you are a junior system designer, then at least be careful not to pick an awfully wrong set of options. Such a selection can make your 'software design' career end even before you start it. They say: "Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away." One who reads through this article will realize that an architecture is built with the Microsoft ASP.NET MVC Framework, Microsoft Entity Framework, Ajax, jQuery, 'AutoMapper', Enterprise Library (Unity Application Block), 'MsTest', Log4Net etc. This design may not be the most flawless of all. But I can assure you that it is not a terribly wrong one either. It will fall within the acceptable range, and once completed, you will be able to re-use it as a framework to develop your very own (small to medium sized) web applications. In order to follow this article, you are expected to have at least 4 years or more of development experience with a sound understanding of OOP concepts and design patterns.
Additionally, since our system is going to be designed with the ASP.NET MVC framework, you need to have experience developing a few test applications with ASP.NET MVC, most preferably with Microsoft Entity Framework. If you think you have sufficient expertise, then read on. If you still have not set up ASP.NET MVC, please have the items listed below installed before proceeding any further. You can also install ASP.NET MVC using the Microsoft Web Platform Installer. As stated before, ASP.NET MVC is the Microsoft variant of MVC, and it is a free Microsoft framework for developing great web applications using the Model-View-Controller pattern. It provides total control over your HTML and URLs, enables rich Ajax integration, and facilitates test-driven development. Read more >> I hope that the above description also answers the famous question that some developers have: 'Is MVC a pattern or a framework?'. Let me reiterate here, just to ensure that you understand it correctly: MVC is a pattern, as well as a framework. The MVC pattern was used to develop the Microsoft ASP.NET MVC framework; so the first to be introduced was the pattern, and then the framework. ASP.NET MVC is not something completely new; it is something that Microsoft has built on top of the standard ASP.NET library. It can be taken as an extension to the .NET Framework. In other words, it is something greater than standard ASP.NET or Web Forms. When you are working with Web Forms, the framework itself controls plenty of variables, whereas ASP.NET MVC does not do that. Instead, the framework allows you to control them. So with ASP.NET MVC, it is left to you to decide how you want to build your system. Though I said that we need to take care of everything with ASP.NET MVC, it is not really difficult. There are many resources available on the internet to make life easier for MVC developers.
Unlike before, when people complained about having to write more code to get something done in MVC, now you can do it quicker with the support of contributions from other developers. 'MVC Contrib' is one, and was designed to add functionality and ease-of-use to Microsoft's ASP.NET MVC Framework. 'MVC Contrib' is useful for developers looking to develop and test web applications on top of the ASP.NET MVC framework. In addition to that, there are many new features made available with the latest version of the ASP.NET MVC framework. You can learn more about this by reading the “What’s New in ASP.NET MVC 2” document on the web site. There are debates. Some designers say Web Forms is better than MVC. They say that there are many 3rd party controls already built for Web Forms. MVC is extendable, yet for all that, they say the architecture requires you to write more code. Additionally, they also say that a well-architected Web Forms application can be just as extensible as MVC. In my view, the decision has to be taken based on your requirements. When it is 50:50, the latest technology is the better choice. However, we should not choose Web Forms simply because of that natural resistance we have to changing what we are used to. This habit of resisting change is not something that you can practice in IT. IT changes rapidly, forcing one to give up old things and move ahead with new ones. I think ASP.NET MVC is going to have a great future. Microsoft has done it right this time with ASP.NET MVC. It is already proving to be the framework for building the next generation of web applications. So why wait? If you adopt early, you will grow easier with it. If not, then you will experience the pain of becoming an outdated developer with the next wave of technology updates. In the next part of the article, I will gradually start designing our Document Management System (DMS).
That will demonstrate the application of the Microsoft ASP.NET MVC framework to build a real-world web application. However, before we move on to that, I need to stress a few other important things about MVC here. I have carefully picked a couple of options to build our DMS. This design takes advantage of the Microsoft ASP.NET MVC Framework, Microsoft Entity Framework, Ajax, jQuery, 'AutoMapper', Enterprise Library (Unity Application Block), 'MsTest', Log4Net etc. I am planning to discuss some of the important selections that I have made in a later part of this article, but mostly I expect you to learn those by looking at the source code itself. I will mainly focus on our goal, which is to teach you how to establish the right architecture to build a small to mid-size web application with the ASP.NET MVC framework. In that effort, I am planning to touch on all the important points. I will give just enough detail for an experienced developer to understand them, but no more, as I need to control the length of this article. As the first step of architecting our DMS (or any) system, you need to thoroughly analyze the system. This includes fully and deeply understanding its requirements, time period, cost constraints and quality requirements etc. Let me give a few points that help you understand it below... Once the system requirements are fully understood, mainly covering the questions given above, you can start the first few steps of designing a system. This is where you define the structure of the software and find the right way in which that structure provides conceptual integrity for the system. This is the development work product that gives the highest return on investment in terms of quality, schedule and cost. There are many aspects to consider in the design of a piece of software. The importance of each should reflect the goals the software is trying to achieve. Some of these aspects, which I referred to from Wiki-Software Design, are given below...
Imagine that our DMS is a complicated business problem, and then try to direct your design effort to reduce its complexity. This has to happen through abstraction and separation of concerns. This means breaking the system into its structural elements, architectural components, subsystems, sub-assemblies, parts or 'chunks'. This is not hard. Any experienced designer can identify the components belonging to a system just by looking at its requirements document. However, there are standard techniques used to do this too. I have also written an article on this topic, which you can find on The Code Project with the title "A Practical Approach to Computer Systems Design and Architecture". An experienced architect does not need to go through every single step in the book to get a reasonable design done for a small web application. Such architects can use their experience to speed up the process. Since I have done similar web applications before and have understood my deliverable, I am going to take the faster approach to get the initial part of our DMS design done. That will hopefully assist me in shortening the length of this article. For those who do not have experience, let me briefly mention the general steps involved in architecting software below... As I said, our Document Management System (DMS) architecture is done by combining the three-layer architecture and the Model View Controller architecture. In order to speed up the process, let me directly define the system framework of our DMS. Please follow the instructions given below to create your system framework directly in Visual Studio 2010. Once everything is completed, your solution will look like this. In the above diagram, the 'MvcDemo' project is the MVC Web Project. That project is created using Visual Studio's default MVC project template.
In it, I decided to keep 'Views' and 'Controllers' in the same web project, while 'Models' were shifted out to another layer (you will read more about this change later). So in summary, you now have a solution with one ASP.NET MVC Web Application project and three empty class libraries, together with a test project, which is automatically created for us by Visual Studio. Note: I have used the main project name ('MvcDemo') as a prefix to name the class libraries, but that is something that you can opt to follow or not. In our solution, each subsystem is represented by a project. Each project represents a meaningful portion of the main business problem. If you want to reuse one project in multiple other projects (just like we will be doing with the 'MvcDemo.Common' project), you can do that by adding a project reference. The data access layer provides a centralized location for all calls into the database, and thus makes it easier to port the application to other database systems. There are many different options to get the Data Access Layer built quickly. In my effort to build the DAL, I could have chosen from many different model providers, for example: Or else I could have chosen any other model provider, possibly: This means that we will be using an ORM (Object Relational Mapping) tool to generate our Data Access Layer. As you can see, there are many tools, all having their advantages and disadvantages. In order to select the right tool, you need to consider your project requirements, the skills of the development team, and whether the organization has standardized on a specific tool etc. However, I am not going to use the tools listed above; instead I decided to use Microsoft Entity Framework (EF), which is something Microsoft has newly introduced. The ADO.NET Entity Framework (EF) is an object-relational mapping (ORM) framework for the .NET Framework.
ADO.NET Entity Framework (EF) abstracts the relational (logical) schema of the data that is stored in a database and presents its conceptual schema to the application. However, the conceptual model that EF created failed to meet my requirements. Therefore I had to find a workaround to create the right conceptual model for this design. You will see a few adjustments that I have made to achieve my requirement in a later part of this article. The selection of EF (Entity Framework) for my ASP.NET MVC front end is not a blindfolded decision. I do have a few affirmations... The community has raised concerns over what has been promised and is being delivered with EF. You can read the ADO.NET Entity Framework Vote of No Confidence for more details about it. However, the second version of Entity Framework (known, somewhat confusingly, as 'Entity Framework v4' because it forms part of .NET 4.0) is available in Beta form as part of Visual Studio 2010, and has addressed many of the criticisms made of version 1. The entity data model (which is also called the conceptual model of the database), which ADO.NET Entity Framework auto-created for our DMS, is given below. This has a 1:1 (one to one) mapping between the database tables and the so-called conceptual models. These models or objects can be used to communicate with the respective physical data source or the tables of the database. I hope you can understand my conceptual model quite easily. The Data Access Layer project, which is given below, has three folders, namely Models, Infrastructure and Interfaces (you will later notice that this project template is reused in other projects of this system too). Outside of those folders, you find a few classes: one base class and two concrete classes. These are the main operational classes of the data access layer (DAL) project. This project template made it possible to directly access all commonly used main operational classes without worrying about navigating into directories.
"The inherent complexity of a software system is related to the problem it is trying to solve. The actual complexity is related to the size and structure of the software system as actually built. The difference is a measure of the inability to match the solution to the problem." -- Kevlin Henney, "For the sake of simplicity" (1999)

Simplicity is the soul of efficiency. This does not mean that the design has to be unrealistically simple. It has to be just enough to achieve the requirements, and no more or less. Additionally, consistency is also important. It can reduce the differences between modules, and thus helps one easily understand the design. Therefore the design has to be simple and consistent. While keeping these in mind, let's have a close look at our Data Access Layer (DAL) design. In the Data Access Layer (DAL) design, I thought of using a variant of the repository pattern. The Repository pattern is commonly used by many enterprise designers. It is a straightforward design that gives you testable and reusable code modules. Furthermore, it gives flexibility and separation of concerns between the conceptual model and the business logic too. In my design there are two repositories, namely 'DocumentRepository' and 'FileRepository', which mediate between the domain (also called the Business Logic Layer) and the data mapping layer (also called the Data Access Layer) using domain objects (also called Business Objects or Models). In general it is recommended to have one repository class for every business object of the system. Among the few folders that you see in that project, the folder named 'Interfaces' is important. So let's check inside the 'Interfaces' folder and see what interfaces we have in it...
I have used interfaces in the DAL design, and that might make you ask questions like 'why do we need these interfaces?' and 'why not just have the concrete implementation alone?'. There are many reasons for having interfaces in a design. They allow implementations to be interchangeable (one common scenario is unit testing, where you can replace the actual repository implementation with a fake one). Additionally, when carefully used, they help to better organize your design. The interface is the bare minimum needed to explain what a function or class will do. It's the least amount of code you can write for a full declaration of functionality and usability. For this reason, it's clearer for a user of your function or class to understand how the object works. The user shouldn't have to look through all of your implementations to understand what the object does. So, again, defining interfaces is the more organized approach. In some instances, I have seen designers add interfaces for every class, which I think is too extreme a design practice. In my design I have used interfaces not only to organize the design but also to interchange implementations. Later in the article you will see how these interfaces are used to assist the 'Unity Application Block' in injecting dependencies too. You can see in the screen above that I have a set of interfaces defined for pagination too. Pagination (of business objects/entities) is something that developers code in many different layers. Some code it in the UI (User Interface) layer, while others do it in the BLL (Business Logic Layer) or DAL (Data Access Layer). I thought of implementing it inside the DAL to avoid the unwanted network round trips that would otherwise occur in a distributed multi-tier deployment setup. Additionally, when kept inside the DAL, I have the option of using some of the built-in EF (Entity Framework) functions for pagination work. The figure below summarizes the overall DAL design with its main components and their relations. I think it is clear how the three interfaces and their concrete implementations are defined. I think it is important for you to know how this design evolved. So let me talk about that here.
Initially, I had the 'IRepository<T>' interface. That was used to implement the two specialized 'Document' and 'File' repositories. At that point I noticed that the two had many common operations. So I thought of defining them abstractly with an abstract class, and thus decided to add a new generic abstract class called 'RepositoryBase<T>'. Once all of this was done, the system had the generic 'IRepository<T>' interface right at the top, then the abstract 'RepositoryBase<T>' class, and then the two concrete implementations, the 'File' and 'Document' repositories. I thought it was done, and had a final look before closing up the design, but that made me realize that the design still had issues. So let me talk about that in the next paragraph. The design I had made it impossible to interchange the implementation of either of the specialized repositories. As an example, if I have a test/fake version of the 'DocumentRepository' implemented with the name 'TestDocumentRepository' and wanted to interchange it with the 'DocumentRepository', that is not possible with my design. So I decided to make a few more adjustments to the design by introducing two specialized interfaces called 'IDocumentRepository' and 'IFileRepository'. This made it possible to fully hide the specialized implementations, and hence gain the required interchangeability. That final change concludes my DAL design. As I said before, I used the Unity Application Block to inject dependencies into this system. The code below shows how one can use the Unity Application Block to interchange implementations of IDocumentRepository. The Unity Application Block supports writing this code in the 'global.asax' file, or using the 'web.config' to set the details needed to resolve dependencies. Please expect more details on this later..
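The hierarchy described above can be sketched as follows. This is an illustrative outline only, not the article's full source: the method list is trimmed down, `Document` is a placeholder entity, and the in-memory list stands in for the EF-backed object set.

```csharp
using System.Collections.Generic;

// Placeholder entity for the sketch.
public class Document { public int Id { get; set; } }

// Generic contract at the top of the hierarchy.
public interface IRepository<T> where T : class
{
    void Add(T entity);
    IList<T> GetAll();
}

// Specialized interface: callers depend on this abstraction,
// so a TestDocumentRepository can be swapped in for unit tests.
public interface IDocumentRepository : IRepository<Document> { }

// Abstract base holding the operations common to all repositories.
// (An in-memory list stands in for the EF object set here.)
public abstract class RepositoryBase<T> : IRepository<T> where T : class
{
    protected readonly List<T> Items = new List<T>();
    public void Add(T entity) { Items.Add(entity); }
    public IList<T> GetAll() { return Items; }
}

// Concrete implementation, fully hidden behind IDocumentRepository.
public class DocumentRepository : RepositoryBase<Document>, IDocumentRepository { }
```

Because consumers only ever see `IDocumentRepository`, substituting a fake implementation is a one-line change in the container registration, as the `RegisterType` snippet below shows.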
// when using the actual DocumentRepository implementation
.RegisterType<IDocumentRepository, DocumentRepository>()

// when using the test version of the DocumentRepository implementation
.RegisterType<IDocumentRepository, TestDocumentRepository>()

I also realized that, with a few adjustments, this very same design pattern can be used in the business logic layer too. So you will see the same design approach being re-used in the 'MvcDemo.Core' project as well. Before we move on to the next section, let's have a look at the implementation of the 'RepositoryBase<T>' class. In the base repository implementation, I was careful not to make it unnecessarily complex. As you can see, it only has the most needed methods. You may also see how the dependency that this class has on the 'IRepositoryContext' is handled by passing the dependent class through the constructor. You will later see how this technique is used to inject dependencies across all layers. You can find more about Unity DI (Dependency Injection) here..

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.Objects;
using System.Linq.Expressions;
using MvcDemo.Dal.Interfaces;
using MvcDemo.Dal.Infrastructure;
using MvcDemo.Dal.EntityModels;

namespace MvcDemo.Dal
{
    public abstract class RepositoryBase<T> : IRepository<T> where T : class
    {
        public RepositoryBase()
            : this(new DmsRepositoryContext())
        {
        }

        public RepositoryBase(IRepositoryContext repositoryContext)
        {
            repositoryContext = repositoryContext ?? new DmsRepositoryContext();
            _objectSet = repositoryContext.GetObjectSet<T>();
        }

        private IObjectSet<T> _objectSet;

        public IObjectSet<T> ObjectSet
        {
            get { return _objectSet; }
        }

        #region IRepository Members

        public void Add(T entity)
        {
            this.ObjectSet.AddObject(entity);
        }

        public void Delete(T entity)
        {
            this.ObjectSet.DeleteObject(entity);
        }

        public IList<T> GetAll()
        {
            return this.ObjectSet.ToList<T>();
        }

        public IList<T> GetAll(Expression<Func<T, bool>> whereCondition)
        {
            return this.ObjectSet.Where(whereCondition).ToList<T>();
        }

        public T GetSingle(Expression<Func<T, bool>> whereCondition)
        {
            return this.ObjectSet.Where(whereCondition).FirstOrDefault<T>();
        }

        public void Attach(T entity)
        {
            this.ObjectSet.Attach(entity);
        }

        public IQueryable<T> GetQueryable()
        {
            return this.ObjectSet.AsQueryable<T>();
        }

        public long Count()
        {
            return this.ObjectSet.LongCount<T>();
        }

        public long Count(Expression<Func<T, bool>> whereCondition)
        {
            return this.ObjectSet.Where(whereCondition).LongCount<T>();
        }

        #endregion
    }
}

In the above code, 'ObjectSet' is a public property exposed via 'get'. That allows outside parties to know the currently active object set of the respective repository. However, when you implement the repository with a specific type, the returned 'ObjectSet' becomes predictable. This is a weakness of my design. As an example, when implementing 'IFileRepository' the 'ObjectSet' is guaranteed to return 'IObjectSet<File>' and nothing else. So one can argue that the property does not need to be publicly exposed. That is a very valid argument. As you can realize now, I cannot defend my design. One should be able to defend his/her design. Therefore, I think it is acceptable to reduce the property's access level from 'public' to 'protected'. That way only the extended classes will be able to access the active object set. As I said before, you have to question your design; the questioning validates the design.
If you check the code line below, you might notice something unfamiliar. The '??' operator is not commonly used in general coding, but it happened to be very useful for me here. repositoryContext = repositoryContext ?? new DmsRepositoryContext(); The '??' operator is called the null-coalescing operator and is used to define a default value for nullable value types as well as reference types. It returns the left-hand operand if it is not null; otherwise it returns the right-hand operand. This code pattern is important when using an external module (the Unity Application Block) to inject dependencies. The business logic layer is the middle layer in our multi-layer architecture. This part of the code contains the business logic for our application. This separation of business functions into a separate project has the same advantages that we recognized in the data access layer design. The screen below expands the 'Business Logic Layer' part of the system. In it, the folder named 'Models' has a set of 'View' and 'Edit' model classes. These classes represent the 'Models' of the MVC pattern. They are the domain/conceptual models of our DMS's BLL (Business Logic Layer). As an example, to display a document in the front end, you can have a view model named 'DocumentViewModel' with the properties to display on the UI. These model classes have their corresponding 'Views' and 'Controllers' too. In addition, when a 'View' (web page) is used to edit/update an entity, you can use a 'DocumentEditModel' to capture user inputs. That same model can be used to store the definitions of its property validation requirements too (read 'Validating Business Objects' below for more details). You can break this system down further to make it more flexible, but that comes with a cost: the further you break it down, the more difficult it will be to maintain your code. I thought this is just right for our requirement.
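As a sketch of the split between display models and edit models described above, the two classes could look like the following. Only the class names ('DocumentViewModel', 'DocumentEditModel') come from the article; the property names are illustrative assumptions.

```csharp
namespace MvcDemo.Core.Models
{
    // Read-only shape consumed by views that display a document.
    public class DocumentViewModel
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Author { get; set; }
    }

    // Shape used to capture user input on an edit page. Validation
    // attributes for its properties would live here as well (see the
    // 'Validating Business Objects' section of the article).
    public class DocumentEditModel
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Author { get; set; }
    }
}
```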
Therefore, based on your requirements, you can decide how much further you need to break the system down. You can ignore the 'Mapping' related interfaces for now; I will come back to them a little later. As you can see in the screen above, the BLL design is somewhat similar to our data access layer (DAL) design. Just as with the DAL, I have a separate set of interfaces that define the framework for this layer. As you see above, I have 'DocumentService', which implements 'IDocumentService', and 'FileService', which implements 'IFileService'. The 'ServiceBase<T>', which is the counterpart of the 'RepositoryBase<T>' of the DAL, is still empty. However, from experience I know that we are later going to get functions that are common to both the document and file services, which could be abstractly implemented/defined in 'ServiceBase<T>'. I think it is a good practice to define an abstract base class whenever you make multiple implementations out of a generic interface. This is the topmost layer of our application. The presentation layer displays information to the user. It communicates with the other layers to produce results for the web browser. The architecture we used has several differences that I want to point out. We removed the 'Models' of the MVC pattern from our web project and included them in the business logic layer. We also removed the regular presentation layer of the three-tier architecture and merged it with the MVC front end. This means that the 'Views' and 'Controllers' of the MVC pattern represent the presentation layer part of the three-tiered system. As this is a bit complex to explain, I have drawn the diagram below to elaborate further. In the screen below you will see how our ASP.NET MVC web project structure looks.
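To illustrate how a service class sits between the controllers and the repositories, here is a minimal sketch of 'DocumentService'. The exact members of 'IDocumentService' are not shown in this part of the article, so the 'GetDocument' method is an assumed example; the constructor injection of 'IDocumentRepository' mirrors the pattern used throughout the code.

```csharp
using MvcDemo.Core.Interfaces;
using MvcDemo.Core.Models;
using MvcDemo.Dal.Interfaces;

namespace MvcDemo.Core
{
    public class DocumentService : IDocumentService
    {
        private readonly IDocumentRepository _documentRepository;

        // The repository dependency is injected, so Unity can swap in a test double.
        public DocumentService(IDocumentRepository documentRepository)
        {
            _documentRepository = documentRepository;
        }

        // Assumed example method: load the entity and map it to a view model.
        public DocumentViewModel GetDocument(int id)
        {
            var entity = _documentRepository.GetSingle(d => d.Id == id);
            if (entity == null)
            {
                return null;
            }
            return new DocumentViewModel { Id = entity.Id, Title = entity.Title };
        }
    }
}
```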
The default directory structure of an ASP.NET MVC application has 3 top-level directories: As you can probably guess, it is recommended to put your controller classes underneath the 'Controllers' directory and your view templates underneath the 'Views' directory; as you already know, we have decided to move the 'Models' out of this project, so we will not use the 'Models' folder. In general, the ASP.NET MVC framework doesn't force you to always use this structure. Unless you have a good reason to use an alternative file layout, I'd recommend using this default layout. Dependency injection is a way for objects to get configured with their dependencies by an external entity. This may sound a bit abstract, so let me give you an example. E.g.: in our code you have a class called 'DmsController', which has a dependency on 'IDocumentService'. This means that when you create an object of type DmsController, you need to pass an implementation of 'IDocumentService' as a parameter. Now suppose you have two implementations of 'IDocumentService', where one is called 'DocumentService' (actual) and the other is called 'TestDocumentService' (unit test). The Unity Application Block (an IoC container) then allows you to dynamically configure the right implementation to resolve the dependency that 'DmsController' has on 'IDocumentService'. If that is still not clear, let me show the way I am using this application block to inject dependencies in my code. In order to use the Unity Application Block, you need to have a custom controller factory defined. In the screen above, you can see a class called 'UnityControllerFactory'. I use that class as my custom controller factory. If you look at its source code, you will see that it inherits from the MVC 'DefaultControllerFactory'. This means that I can use it to replace the default MVC controller factory.
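A minimal sketch of the constructor injection described above could look like this. The 'DmsController'/'IDocumentService' names come from the article; the action method body is an assumed example.

```csharp
using System.Web.Mvc;
using MvcDemo.Core.Interfaces;

namespace MvcDemo.Controllers
{
    public class DmsController : Controller
    {
        private readonly IDocumentService _documentService;

        // Unity resolves IDocumentService and passes it in when creating the controller.
        public DmsController(IDocumentService documentService)
        {
            _documentService = documentService;
        }

        // Assumed example action: the controller never knows whether it received
        // the real DocumentService or a TestDocumentService.
        public ActionResult Details(int id)
        {
            var model = _documentService.GetDocument(id);
            return View(model);
        }
    }
}
```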
As per the requirements of the Unity Application Block, this custom class is used to set the container that carries the details needed to resolve interfaces. Let me show you how the 'Application_Start' method of 'Global.asax' looks.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);

    /*//Configure container with web.config
    UnityConfigurationSection section =
        (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
    section.Configure(_container, "containerOne");*/

    _container = new UnityContainer()
        .RegisterType<IDocumentService, DocumentService>(new ContainerControlledLifetimeManager())
        .RegisterType<IFileService, FileService>()
        .RegisterType<IDocumentRepository, DocumentRepository>()
        .RegisterType<IFileRepository, FileRepository>()
        .RegisterType<IRepositoryContext, DmsRepositoryContext>()
        .RegisterType<IFormsAuthenticationService, FormsAuthenticationService>()
        .RegisterType<IMembershipService, AccountMembershipService>();

    //Set for Controller Factory
    IControllerFactory controllerFactory = new UnityControllerFactory(_container);
    ControllerBuilder.Current.SetControllerFactory(controllerFactory);
}

As I said above, the 'UnityContainer' is used to register the interfaces with their respective concrete implementations. This is a one-time registration, hence it is advised to do it inside the 'Application_Start' method of 'Global.asax'. The container is passed on to my custom-made UnityControllerFactory. Finally, you have to use the built-in method of 'ControllerBuilder' to set my controller factory as the current controller factory of this application. The rest is automatic.
For more details on this topic, please visit the Unity Dependency Injection IoC screencast. You can also see how my 'UnityControllerFactory' looks:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using Microsoft.Practices.Unity;
using System.Web.Routing;
using System.ComponentModel;

namespace MvcDemo.Infrastracture.Mvc
{
    public class UnityControllerFactory : DefaultControllerFactory
    {
        private readonly IUnityContainer _container;

        public UnityControllerFactory(IUnityContainer container)
        {
            _container = container;
        }

        protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
        {
            if (controllerType != null)
            {
                return _container.Resolve(controllerType) as IController;
            }
            return base.GetControllerInstance(requestContext, controllerType);
        }
    }
}

The 'DefaultControllerFactory' is the controller factory that is registered by default. This class provides a convenient base class for developers who want to make only minor changes to controller creation. As you can see, we have created our custom controller factory, named 'UnityControllerFactory', by extending the default controller factory. Reference: A business object is used to store data. The smooth functioning of a business function depends on the validity of its associated business objects. Therefore it is essential to validate business objects before they are used in business functions. As an example, let's take a business object named 'User', which has a property called 'Password'. Let's also assume that its length has to be at least 8 characters. Then, before you store the password, you need to do a length validation to make sure that the property meets its business requirement. In .NET there are several options available for object validation. The validation can be done at different layers of a system. Some validations can efficiently be done inside the UI (user interface) layer.
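A minimal sketch of the 'User'/'Password' example above, using the Data Annotation Validator attributes from 'System.ComponentModel.DataAnnotations'. The class itself is hypothetical; only the rule (a password of at least 8 characters) comes from the text.

```csharp
using System.ComponentModel.DataAnnotations;

public class User
{
    [Required]
    public string UserName { get; set; }

    // Declarative enforcement of the "at least 8 characters" business rule.
    [Required]
    [StringLength(100, MinimumLength = 8,
        ErrorMessage = "The password must be at least 8 characters long.")]
    public string Password { get; set; }
}
```

In an ASP.NET MVC action, 'ModelState.IsValid' evaluates these attributes automatically, and the same attributes can be checked in the BLL with 'Validator.TryValidateObject', which is why this approach works in both layers.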
The UI layer is the fastest and most efficient layer for user input validations. However, there are other validations that you cannot do in the UI layer due to various limitations. As an example, the business logic layer is the most suitable place to validate credit card details. The 'Data Annotation Validator' is what we will be using in our code. It allows validation in the UI layer as well as in the BLL and, as I noted above, it is part of the .NET Framework too. Please refer to online material to find out more about the 'Data Annotation Validator'; a quick Googling brought 'this' up for me. Since we have separate projects for our 'BLL' and 'DAL', we have the option of breaking those layers into physical tiers too. As an example, you can wrap the 'MvcDemo.Core' (BLL) project with a web service to deploy it on a separate application server. You can then use a web service proxy to communicate with the 'BLL', while keeping the 'MvcDemo' web site on the original server. The same technique can be used to further scale the system with the 'DAL' project too. The 'MvcDemo.Common' project is a special one that groups operations that are common to any project. As examples, we do exception logging, data type conversions, null handling, encryption etc. in our common project. You can use this project in all of your company's projects. All common functions that you want to re-use can be added to this project. Over time, this common project will grow in size and have more and more classes representing more and more sharable business functions. In addition to creating that common project, you can also extend the same concept to develop domain-centric platforms. A domain-centric platform will have commonly used business functions that are specific to a particular domain. In order to build a platform like that, you need to select a domain that your company provides solutions for.
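As an illustration of the kind of helper that could live in 'MvcDemo.Common', here is a hypothetical null-safe conversion utility. Nothing of this class appears in the article; it merely exemplifies the 'data type conversions' and 'null handling' functions mentioned above.

```csharp
using System;

namespace MvcDemo.Common
{
    // Hypothetical helper: convert loosely-typed input without throwing.
    public static class SafeConvert
    {
        public static int ToInt32(object value, int defaultValue = 0)
        {
            if (value == null)
            {
                return defaultValue;
            }
            int result;
            return int.TryParse(value.ToString(), out result) ? result : defaultValue;
        }
    }
}
```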
As an example, if your company focuses on providing financial solutions, then you can create a platform for financial applications. This can be done by selecting and adding business functions to your platform at the completion of each financial project. This approach will gradually build a platform that you can use as the base framework to develop any financial application. As you can see in the diagram above, the system has three main sections. Unit testing is a very important part of programming. It can be taken as the first mode of testing done on the code, and it helps developers find bugs early. The earlier defects are detected, the easier they can be fixed. In this project, we will be using the 'Unity Application Block' together with 'MsTest' to perform unit testing. The 'Unity Application Block' allows us to develop highly loosely coupled ASP.NET MVC applications. I haven't yet completed the unit testing part in the source code, but I will explain how it can be done. Look at the diagram below. This is the same diagram that I used above, but you will see that I have completely removed the DAL part and have changed the 'DocumentService' to 'TestDocService'. What I have done here is to test the 'DmsController' functions while keeping the 'DocumentService' under my control. In order to do that, I developed a new 'TestDocService' class by implementing the 'IDocumentService' interface. Instead of using the dependency that the 'DocumentService' has on the DAL, I can now hardcode everything inside 'TestDocService'. In other words, I can create a self-sufficient service implementation with 'TestDocService'. You can have a few such test implementations to test various other scenarios too. As an example, you can implement a 'Test' version of 'DocumentService' to test pagination-related behaviors. That version can have a 'GetDocumentList' method implementation that returns a 'DocumentListViewModel' with a few hundred 'DocumentViewModel' objects in it.
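A sketch of how the stubbed service and an MsTest case could look. The class names ('TestDocService', 'DmsController') and the overall approach come from the article; the 'GetDocument' member of 'IDocumentService' and the 'Details' action are assumed for the example.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MvcDemo.Core.Interfaces;
using MvcDemo.Core.Models;
using MvcDemo.Controllers;

// Self-sufficient stub: no DAL dependency, the data is hardcoded.
public class TestDocService : IDocumentService
{
    public DocumentViewModel GetDocument(int id)
    {
        return new DocumentViewModel { Id = id, Title = "Stub document" };
    }
}

[TestClass]
public class DmsControllerTests
{
    [TestMethod]
    public void Details_ReturnsViewWithStubbedModel()
    {
        // Inject the stub instead of the real DocumentService.
        var controller = new DmsController(new TestDocService());

        var result = controller.Details(42) as System.Web.Mvc.ViewResult;

        Assert.IsNotNull(result);
        var model = result.Model as DocumentViewModel;
        Assert.AreEqual("Stub document", model.Title);
    }
}
```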
This test implementation can be used to test the pagination part of the user interface. I found a great blog post that you can use for further reading on this topic. Exception handling is one of the most important aspects of software development; it improves the quality of the software. However, there are many techniques that developers use to handle exceptions, so I thought it important to consolidate exception-handling knowledge into a set of polished best practices and patterns. In my code I decided to catch/log exceptions in the 'Service' classes. You can log the exception, wrap it in a custom exception and throw it back to the UI layer. That way the UI layer has to catch the exception again and redirect the user to the respective error view. However, I decided to cut that part out; instead I return null when there is an exception. This frankly limits your options when it comes to showing the right error message to the user. You can see the implementation of the 'DocumentService' class to see how I have handled exceptions in my code. For further reading... An effective design requires one to address each specific problem separately and uniquely rather than treating everything as if it has the same needs. Putting this into practice requires experience. I offer this design as a basis for you to develop your own concepts in this area. At a later time you may want to revisit this write-up, perhaps to review and challenge your own design.
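A sketch of the catch-and-return-null convention described above, inside a hypothetical service method. The 'Logger' call stands in for whatever logging utility 'MvcDemo.Common' provides; the method body is illustrative, not the article's actual code.

```csharp
using System;
using MvcDemo.Common;
using MvcDemo.Core.Models;
using MvcDemo.Dal.Interfaces;

public class DocumentService
{
    private readonly IDocumentRepository _documentRepository;

    public DocumentService(IDocumentRepository repository)
    {
        _documentRepository = repository;
    }

    public DocumentViewModel GetDocument(int id)
    {
        try
        {
            var entity = _documentRepository.GetSingle(d => d.Id == id);
            return new DocumentViewModel { Id = entity.Id, Title = entity.Title };
        }
        catch (Exception ex)
        {
            // Log in the service layer, then swallow: the caller receives null
            // and the UI shows a generic error instead of an exception page.
            Logger.LogError(ex); // hypothetical logger from MvcDemo.Common
            return null;
        }
    }
}
```

The trade-off the article mentions is visible here: because only null reaches the UI, the controller cannot distinguish "not found" from "database down" when choosing an error message.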
https://www.codeproject.com/Articles/70061/Architecture-Guide-ASP-NET-MVC-Framework-N-tier-En?fid=1566988&df=10000&mpp=50&sort=Position&spc=Relaxed&tid=4418040
I'm receiving this error in Unity while trying to connect to my SQL database. I'm assuming that it has something to do with a missing DLL, but I'm not sure. Can anyone explain to me what this error is indicating and what I need to do to fix it?

Hi, did you fix it yet? I have the same problem.

I have the same problem with Unity 2019.4 when trying to connect to an SQL server with Entity Framework 6.

Answer by Bunny83 · Jul 19, 2019 at 12:52 AM

No, we can't tell you what's wrong since we have almost no information at all besides that stacktrace snippet as an image. First of all, are you actually trying to connect to an MS SQL server (a Microsoft SQL server, not a MySQL server)? The System.Data.SqlClient namespace is only for Microsoft SQL servers. If you're trying to connect to a MySQL server you need the MySQL connector, which has its own separate classes and namespace. If you actually are trying to connect to an MS SQL server, we don't know what platform you currently target. Certain classes are only available on certain platforms. Apart from that, it highly depends on what scripting backend you use (Player Settings) and what .NET version (also Player Settings). The next point is that, depending on the server you want to reach, there are in most cases multiple ways to do the same thing. Since we have absolutely no idea what you actually did in your code, we can't help you here. Finally, when you actually develop a game that should run on a user's device, you never want to connect directly to an SQL server. Since the credentials for the server need to be shipped with your game, a hacker can extract those and do whatever he wants to your database. You usually communicate through some kind of server-side interface like a PHP script, JavaServer Pages, ASP.NET, Node.js, etc. Again, it is totally unclear what you actually did or what you wanted to do. A stacktrace is pretty useless without the code that belongs to the stack.

6 People are following this question.
https://answers.unity.com/questions/1621858/sqlexception-systemnetsecuritynative.html
NAME
kid3, kid3-qt, kid3-cli - Kid3 ID3 Tagger

SYNOPSIS
kid3 [--help | --author | --version | --license | --desktopfile FILE] [FILE...]
kid3-qt [--portable] [Qt-options] [FILE...]
kid3-cli [--portable] [--dbus] [-h | --help] [-c COMMAND1] [-c COMMAND2...] [FILE...]

OPTIONS
--portable

kid3
--help
--author
--version
--license
--desktopfile FILE

kid3-qt
Qt-options

kid3-cli
--dbus
-c
-h|--help

INTRODUCTION
Kid3 is an application to edit the ID3v1 and ID3v2 tags in MP3 files in an efficient way. These tags can be edited by most MP3 players, but not in a very comfortable and efficient way. Moreover, the tags in Ogg/Vorbis, Opus, DSF, FLAC, MPC, APE, MP4/AAC, MP2, Speex, TrueAudio, WavPack, WMA, WAV, AIFF files and tracker modules (MOD, S3M, IT, XM) are supported too. Kid3 does not grab nor encode MP3 files; it is targeted at editing the ID3 tags of all files of an album in an efficient way, i.e. with as few mouse clicks and key strokes as possible. Where most other programs can edit either ID3v1 or ID3v2 tags, Kid3 has full control over both versions, can convert tags between the two formats and has access to all ID3v2 tags. Tags of multiple files can be set to the same value, e.g. the artist, album, year and genre of all files of an album typically have the same values and can be set together. If the information for the tags is contained in the file name, the tags can be automatically set from the file name. It is also possible to set the file name according to the tags found in the file in arbitrary formats. The editing task is further supported by automatic replacement of characters or substrings, for instance to remove illegal characters from filenames. Automatic control of upper and lower case characters makes it easy to use a consistent naming scheme in all tags. The tag information for full albums can be taken from gnudb.org[1], TrackType.org[2], MusicBrainz[3], Discogs[4], Amazon[5] or other sources of track lists.
The import format is freely configurable by regular expressions. Please report any problems or feature requests to the author.

USING KID3

Kid3 features

Example Usage
This section describes a typical session with Kid3. Let's assume we have a directory containing MP3 files with the tracks from the album "Let's Tag" from the band "One Hit Wonder". The directory is named in the "artist - album" format, in our case One Hit Wonder - Let's Tag. The directory contains the tracks in the "track title.mp3" format, which I think is useful because the filenames are short (important when using mobile MP3 players with small displays) and in the correct order when sorted alphabetically (important when using hardware MP3 players which play the tracks in alphabetical order or in the order in which they are burnt on CD, and that order is alphabetical when using mkisofs). Besides this, the artist and album information is already in the directory name and does not have to be repeated in the filename. But back to our example, the directory listing looks like this:

01 Intro.mp3
02 We Only Got This One.mp3
03 Outro.mp3

These files have no tags yet and we want to generate them using Kid3. We use Open (File menu or toolbar) and select one of the files in this directory. All files will be displayed in the file listbox. Lazy as we are, we want to use the information in the directory and file names to generate tags. Therefore we select all files, then click the To: Tag 1 button in the File section. This will set the title, artist, album and track values in all files. To set the year and genre values of all files, we keep all files selected and type in "2002" for the Date and select "Pop" from the Genre combobox. To set only these two values, their checkboxes are automatically checked and all other checkboxes are left unchecked. Now we change the selection by only selecting the first file and we see that all tags contain the correct values.
The tags of the other files can be verified too by selecting them one by one. When we are satisfied with the tags, we use Save (File menu or toolbar). Selecting Create Playlist from the File menu will generate a file One Hit Wonder - Let's Tag.m3u in the directory.

COMMAND REFERENCE

The GUI Elements
The Kid3 GUI is separated into six sections: at the left are the file and directory listboxes, the right side contains the File, Tag 1, Tag 2 and Tag 3 sections.

File List
The file list contains the names of all the files in the opened directory which match the selected file name filter (typically *.mp3 *.ogg *.opus *.dsf *.flac *.mpc *.aac *.m4a *.m4b *.m4p *.mp4 *.mp2 *.spx *.tta *.wv *.wma *.wav *.aiff *.ape). A single or multiple files can be selected. To select no file, click into the empty area after the listbox entries. The selection determines the files which are affected by the operations available via the buttons described below. Besides Name, other columns Size, Type and Date Modified with file details can be displayed. Columns can be hidden by unchecking their name in the context menu of the list header. The order of the columns can be changed by drag and drop. The sort order can be toggled by clicking on the column header. At the left of the names an icon can be displayed: a disc to show that the file has been modified, or information about which tags are present (V1, V2, V1V2 or NO TAG; no icon is displayed if the file has not been read in yet). Directories are displayed with a folder icon. If a directory is opened, its files are displayed in a hierarchical tree. By selecting files from subdirectories, operations can be executed on files in different directories, which is useful if the music collection is organized with a folder for each artist containing folders for the albums of this artist.
Clicking the right mouse button inside the file list opens a context menu with the following commands:

Edit Playlist
A playlist can be created empty or containing the tracks of a folder, see Create Playlist. A playlist file created in such a way can be edited by double click or using Edit from the file list context menu. A dialog with the entries of the playlist is shown. It is possible to open multiple playlists simultaneously. New entries can be added by drag and drop from the file list, a file manager or another playlist. If an entry is dragged from another playlist, it will be moved or copied depending on the system. To invoke the other operation, respectively, the Shift, Ctrl or Alt (to copy instead of move on macOS) key has to be pressed. Reordering entries within the playlist is also possible via drag and drop. Alternatively, entries can be moved using the keyboard shortcuts Ctrl+Shift+Up and Ctrl+Shift+Down (on macOS, Command has to be pressed instead of Ctrl). An entry can be removed using the Delete key. Please note the following: to drag entries from the file list, they have to be held at the left side (near the icons); the same gesture at the right side will perform a multiple selection, so selecting multiple entries is still easily possible. When a playlist has been modified, the changes can be stored using Save or discarded using Cancel. When the window is closed, a confirmation prompt is shown if there are unsaved changes. Tracks selected in a playlist will be automatically selected in the file list, thereby making it possible to edit their tags. To execute actions on a playlist, its file must be selected in the file list. Edit from the context menu will lead to the dialog described in this section, and Play will start the media player with the tracks from the playlist. User actions can act on playlists, for example Export Playlist Folder, which copies the files from a playlist into a folder.
Directory List
The directory list contains the names of the directories in the opened directory, as well as the current (.) and the parent (..) directory. It allows one to quickly change the directory without using the Open... command or drag and drop. Column visibility, order and sorting can be configured as described in the section about the file list.

File
Shows information about the encoding (MP3, Ogg, Opus, DSF, FLAC, MPC, APE, MP2, MP4, AAC, Speex, TrueAudio, WavPack, WMA, WAV, AIFF), bit rate, sample rate, channels and the length of the file. The Name line edit contains the name of the file (if only a single file is selected). If this name is changed, the file will be renamed when the Save command is used. The Format combo box and line edit contain the format to be used when the filename is generated from the first or the second tag. The filename can contain arbitrary characters, even a directory part separated by a slash from the file name, but that directory must already exist for the renaming to succeed. The following special codes are used to insert tag values into the filename: The format codes are not restricted to the examples given above. Any frame name can be used, for instance unified frame names like %{albumartist}, %{discnumber.1}, %{bpm} or format specific names like %{popm}. It is possible to prepend and append strings to the replacement for a format code by adding them in double quotes inside the curly braces of a format code. These strings will only be put into the resulting string if the format code yields a nonempty value. For example, if the file name shall contain both the title and the subtitle, one could use %{title} [%{subtitle}] in the format string. But this would result in a string ending with [] if no subtitle frame exists for a file. In order to omit the brackets if no subtitle is present, %{title}%{" ["subtitle"]"} shall be used instead. This will omit the brackets, the leading space and the subtitle if no subtitle exists.
The list of available formats can be edited in the dialog which appears when clicking the Filename from tag button in the File tab of the settings. A second Format combo box (with arrow down) is used to generate the tags from the filename. If the format of the filename does not match this pattern, a few other commonly used formats are tried. Some commonly used filename formats are already available in the combo box, but it is also possible to type in some special format into the line edit. The list of available formats can be edited in the dialog which appears when clicking the Tag from filename button in the File tab of the settings. Internally, a regular expression is built from the format codes. If advanced regular expressions are required, the format to generate the tags from the filenames can be given as a complete regular expression with captures which are preceded by the format codes, e.g. to extract the track numbers without removal of leading zeros, a format like "/%{track}(\d+) %{title}(.*)" could be used. From: Tag 1, Tag 2: Sets the filename using the selected format and the first tag or the second tag, respectively. To: Tag 1, Tag 2: The tags are set from the filename. First, the format specified in Format is used. If the existing filename does not match this format, the following formats are tried: If a single file is selected, the GUI controls are filled with the values extracted from the filename. If multiple files are selected, the tags of the files are directly set according to the filenames. Tag 1 The line edit widgets for Title, Artist, Album, Comment, Date, Track Number and Genre are used to edit the corresponding value in the first tag of the selected files. The value will be changed when the file selection is altered or before operations like Save and Quit and when the corresponding check box at the left of the field name is checked. This is useful to change only some values and leave the other values unchanged. 
If a single file is selected, all check boxes are checked and the line edit widgets contain the values found in the tags of this file. If a tag is not found in the file, the corresponding empty value is displayed, which is an empty string for the Title, Artist, Album and Comment line edits, 0 for the numerical Date and Track Number edits and an empty selected value for the Genre combo box. The values can be changed and if the corresponding check box is checked, they will be set for the selected file after the selection is changed. The file is then marked as modified by a disk symbol in the file listbox but remains unchanged until the Save command is used. If multiple files are selected, only the values which are identical in all selected files are displayed. In all other controls, the empty values as described above are displayed. All check boxes are unchecked to avoid unwanted changes. If a value has to be set for all selected files, it can be edited and the checkbox has to be set. The values will be set for all selected files when the selection is changed and can be saved using the Save command. The check boxes also control the operation of most commands affecting the tags, such as copy, paste and transfer between tags 1 and 2. To make it easier to use with multiple files where all check boxes are unchecked, these commands behave in the same way when all check boxes are checked and when all check boxes are unchecked. From Tag 2: The tag 1 fields are set from the corresponding values in tag 2. If a single file is selected, the GUI controls are filled with the values from tag 2. If multiple files are selected, the tags of the files are directly set. Copy: The copy buffer is filled with the Tag 1 values. Only values with checked checkbox will be used in subsequent Paste commands. Paste: Pastes the values from the copy buffer into the GUI controls. Remove: This will set all GUI controls to their empty values which results in removing all values. 
The saved file will then contain no tag 1. Tag 2 The GUI controls function in the same way as described for the Tag 1 section, but the size of the strings is not limited. For the tag 2 Genre you can also use your own names besides the genres listed in the combo box, just type the name into the line edit. The tag 2 can not only contain the same values as the tag 1, the format is built in a flexible way from several frames which are themselves composed of several fields. The tag 2 table shows all the frames which are available in the selected file. Edit: This will open a window which allows one to edit all fields of the selected frame. If multiple files are selected, the edited fields are applied to all selected files which contain such a frame. Add: A requester to select the frame type will appear and a frame of the selected type can be edited and added to the file. This works also to add a frame to multiple selected files. Delete: Deletes the selected frame in the selected files. Drag album artwork here is shown if the file does not contain embedded cover art. A picture can be added using drag and drop from a browser or file manager and will be displayed here. Picture frames can be edited or added by double clicking on this control. Tag 3 Some files can have more than two tags, and a third tag section is visible. The following file types can have such a Tag 3 section: The GUI controls work in the same way as in the Tag 2 section. Synchronized Lyrics and Event Timing Codes For information synchronized with the audio data, a specific editor is available. These frames are supported for ID3v2.3.0 and ID3v2.4.0 tags. To add such a frame, the specific frame name has to be selected in the list which appears when the Add button is clicked - Synchronized Lyrics or Event Timing Codes, respectively. The editor is the same for both types, for the event timing codes, only a predefined set of events is available whereas for the synchronized lyrics, text has to be entered. 
In the following, editing synchronized lyrics is explained. A file having an ID3v2 tag is selected, and the lyrics editor is entered using Add and selecting Synchronized Lyrics. For an existing Synchronized Lyrics frame, it is selected and Edit is clicked. The player is automatically opened with the current file so that the file can be played and paused to synchronize lyrics.

The settings at the top of the SYLT editor normally do not have to be changed. If the lyrics contain characters which are not present in the Latin 1 character set, changing the text encoding to UTF16 (or UTF8 for ID3v2.4.0) is advisable. For English lyrics and maximum compatibility, ISO-8859-1 should be used.

The Lyrics section has five buttons at the top. Add will add a new time event in the table. The time is taken from the position of the player, thus adding an entry while playing the track will add a line for the currently played position. The events in the table have to be chronologically ordered, therefore the row will be inserted accordingly. Entries with an invalid time are treated specially: if the currently selected row has an invalid time, its time stamp will be replaced by the current time instead of adding a new row. If the time of the selected row is valid, the first row with an invalid time will be used if present. This behavior should facilitate adding time stamps if the lyrics text is already in the table but the time stamps are missing (which is the case when importing unsynchronized lyrics). Note that the invalid time is represented as 00:00.00, i.e. the same as the time at the absolute beginning of the track, which is not invalid. To make a time invalid, press the Delete key, or use Clear from the context menu. New rows inserted using Insert row from the context menu or created when importing unsynchronized lyrics with From Clipboard or Import also contain invalid time stamps. Rows in the table can be deleted by clicking the Delete button or using Delete rows from the context menu.
Synchronized lyrics can be imported from a file using Import. The expected format is simple or enhanced LRC. If the selected file does not contain a square bracket in the first line, it is supposed to be a simple text file with unsynchronized lyrics. The lines from such a file are then imported with invalid time stamps. The time information can be added using the Add button or by manual entry. It is also possible to import lyrics via copy-paste using From Clipboard. Synchronized lyrics can be written to LRC files using Export. Note that only entries with valid time stamps will be exported and that the entries will be sorted by time. Entries with an invalid time won't be stored in the SYLT frame either, so make sure to include all timing information before leaving the dialog.

The ID3 specification[6] suggests a time stamp for each syllable. However, most players only support the granularity of a line or sentence. To support both use cases, Kid3 follows the same conventions as SYLT Editor[7]. Text which is entered into the table is assumed to start a new line unless it starts with a space or a hyphen. Exceptions to this rule are possible by starting a line with an underscore ('_') to force continuation or a hash mark ('#') to force a new line. These escape characters are not stored inside the SYLT frame. Inside the SYLT frame, new lines start with a line feed character (hex 0A), whereas continuations do not. When reading SYLT frames, Kid3 checks if the first entry starts with a line feed. If this is not the case, it is assumed that all entries are new lines and that no syllable continuations are used.

While the track is played, the row associated with the current playing position is highlighted, so that the correctness of the synchronization information can be verified. If an offset has to be added to one or more time stamps, this can be accomplished with the Add offset context menu. Negative values can be used to reduce the time.
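The line-continuation convention described above can be sketched in Python. This is an illustrative sketch of the stated rules, not Kid3 code; the function names are hypothetical.

```python
def starts_new_line(text):
    """Decide whether a SYLT entry starts a new line, per the convention
    above: text starting with a space or hyphen continues the previous
    line; a leading '_' forces continuation, a leading '#' a new line."""
    if text.startswith('_'):
        return False
    if text.startswith('#'):
        return True
    return not (text.startswith(' ') or text.startswith('-'))

def stored_text(text):
    # The '_' and '#' escape characters are not stored inside the SYLT
    # frame; inside the frame, a new line starts with a line feed (hex 0A).
    body = text[1:] if text[:1] in ('_', '#') else text
    return ('\n' + body) if starts_new_line(text) else body
```

For example, an entry " world" would be stored as a continuation of the previous syllable, while "#-dash" would start a new line even though a plain leading hyphen would not.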
Using Seek to position in the context menu, it is possible to set the playing position to the time of the selected row.

Recommended procedure to add new synchronized lyrics

The File Menu

File → Open... (Ctrl+O)
File → Open Recent
File → Open Directory... (Ctrl+D)
File → Reload (F5)
File → Save (Ctrl+S)
File → Revert
File → Import...

Import from a freedb.org server is possible using a dialog which appears when From Server: gnudb.org or TrackType.org is selected. The artist and album name to search for can be entered in the two topmost fields; the albums which match the query will be displayed when Find is clicked and the results are received from the server[8]. Importing the track data for an album is done by double-clicking the album in the list. The freedb.org server to import from can be selected as well as the CGI path. The imported data is displayed in the preview table of the import dialog. When satisfied with the displayed tracks, they can be imported by terminating the import dialog with OK.

A search on the Discogs server can be performed using Discogs. As in the gnudb.org dialog, you can enter artist and album and then choose from a list of releases. If Standard Tags is marked, the standard information is imported, e.g. artist, album, and title. If Additional Tags is marked, more information is imported if available, e.g. performers, arrangers, or the publisher. If Cover Art is marked, cover art will be downloaded if available.

A search on Amazon can be performed using Amazon. As in the gnudb.org dialog, you can enter artist and album and then choose from a list of releases. If Additional Tags is marked, more information is imported if available, e.g. performers, arrangers, or the publisher. If Cover Art is marked, cover art will be downloaded if available.

You can search in the same way in the release database of MusicBrainz using From MusicBrainz Release. The workflow is the same as described for From gnudb.org.
Import from a MusicBrainz server is possible using the dialog which appears when From MusicBrainz Fingerprint is selected. The Server can be selected as in the freedb import dialog. Below is a table displaying the imported track data. The right column shows the state of the MusicBrainz query, which starts with "Pending" when the dialog is opened. Then the fingerprint is looked up and, if it does not yield a result, another lookup using the tags in the file is tried. Thus it can be helpful for a successful MusicBrainz query to store known information (e.g. artist and album) in the tags before the import. If a result was found, the search ends in the state "Recognized"; otherwise nothing was found, or there were multiple ambiguous results and one of them has to be selected by the user. OK and Apply use the imported data, Cancel closes the dialog. The closing can take a while since the whole MusicBrainz machinery has to be shut down.

For the import of textual data, From File/Clipboard opens a subdialog, where several preconfigured import formats are available. The first two, "CSV unquoted" and "CSV quoted", can be used to import data which was exported by the Export dialog. The CSV data can be edited with a spreadsheet and shall be written using tabs as delimiters. Import should then be possible using "CSV quoted", which is more flexible than "CSV unquoted". However, its fields cannot contain any double quotes. If you only export from Kid3 and import later, "CSV unquoted" can be used as a simple format for this purpose. Note that there are also "Export CSV" and "Import CSV" commands in the context menu of the file list, which use scripts to export and import CSV data in a more complete, powerful and flexible way.

The next format, "freedb HTML text", can be used to copy information from an HTML page of freedb.org[9]. Search an album in freedb and, if the desired information is displayed in the web browser, copy the contents to the clipboard.
Then click the From Clipboard button and the imported tracks will be displayed in the preview table at the top of the dialog. If you are satisfied with the imported data, terminate the dialog with OK, which will insert the data into the tags of the current directory. The destination (Tag 1, Tag 2 or Tag 1 and Tag 2) can be selected with a combo box. The files in the current directory should be in the correct track order to get their tags assigned. This is the case if they are numbered.

The next preconfigured import format, "freedb HTML source", can be used if the data is available as an HTML document. Import is possible using the From File button, which opens a file selector, or by copying its contents from an editor and then importing from the clipboard. This format can be useful for offline import, although the HTML document could also be opened in a browser and then be imported in the first format via the clipboard.

More preconfigured formats, e.g. "Track Title Time", are available. An empty custom format can be created with Add to be set by the user. Two lines below the format name can be set with a regular expression to capture the fields from the import text. The first regular expression will be parsed once per document to gather per-album data such as artist, album, year and genre. The second line is tried to match from the start of the document to the end to get track data, usually number and title. The regular expressions include all the features offered by Qt, which is most of what Perl offers. Bracketing constructs "(..)" create capture buffers for the fields to import and are preceded by Kid3 specific codes to specify which field to capture. The codes are the same as used for the filename format; besides the codes listed below, any frame name is possible:

For example, a track regular expression (second line) to import from an .m3u playlist could be "%{track}(\d+)\s+%{title}(\S[^\r\n]*)\.mp3[\r\n]".
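To see what such a track regular expression does, here is an illustrative Python sketch (not Kid3 code): the Kid3 field codes in front of the capturing parentheses are translated into named groups, and the resulting plain regular expression is applied repeatedly to the playlist text. The playlist content is invented for the example.

```python
import re

# The example track regular expression from the text above.
kid3_pattern = r"%{track}(\d+)\s+%{title}(\S[^\r\n]*)\.mp3[\r\n]"

def to_python_regex(pattern):
    # Turn each "%{name}(" prefix into a named capture group "(?P<name>".
    return re.sub(r"%\{(\w+)\}\(", r"(?P<\1>", pattern)

text = "01 Some Song.mp3\n02 Another Song.mp3\n"  # invented playlist content
rx = re.compile(to_python_regex(kid3_pattern))
tracks = [m.groupdict() for m in rx.finditer(text)]
# tracks → [{'track': '01', 'title': 'Some Song'},
#           {'track': '02', 'title': 'Another Song'}]
```

Each match yields one track; the captured "track" and "title" values are what Kid3 would store into the corresponding tag fields.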
All formats can be changed by editing the regular expressions and the name and then clicking Save Settings. They will be stored in the kid3rc file in the configuration directory. This file can be directly edited to have more import formats, or it can be deleted to revert to the default formats. Formats can be deleted using Remove.

Accuracy shows an estimation of how well the imported information matches the given tracks. It uses track durations or file names to calculate the level of similarity in percent. Cover Art shows the URL of the album cover image which will be downloaded.

To check whether the imported tracks match the current set of files, the duration of the imported tracks can be compared with the duration of the files. This option can be enabled with the check box Check maximum allowable time difference, and the maximum tolerated difference in time can be set in seconds. If a mismatch in a length is detected, the length is displayed with a red background in the preview table.

If the files are ordered differently than the imported tracks, their assigned tracks have to be changed. This task can be facilitated using the Match with buttons Length, Track, and Title, which will reorder the tracks according to the corresponding field. To correct the assignments manually, a track can be dragged with the left mouse button and the Ctrl key held down, and then dropped at the new location.

When the import dialog is opened, it contains the actual contents of the tags. The tag type (Tag 1, Tag 2, Tag 1 and Tag 2) can be selected using the Destination combo box. The button on the right of this combo box can be used to revert the table to the current contents of the tags. The check boxes in the first table column can be used to select the tracks which are imported. This can be useful if a folder contains the tracks of both CDs of a double CD and only the tracks of the second CD have to be imported.
To identify the tracks which are imported, it is possible to display the file names or the full paths to the files using the context menu of the table header. The values in the import table can be edited. The revert button to the right of the Destination combo box can be used to restore the contents of the tags, which can also be useful after changing the Destination. Almost all dialogs feature a Save Settings button, which can be used to store the dialog specific settings and the window size persistently.

From Tags leads to a subdialog to set tag frames from the contents of other tag frames. This can be used to simply copy information between tags or to extract a part from one frame and insert it in another. As in the import from file/clipboard dialog, there are freely configurable formats to perform different operations. Already preconfigured are formats to copy the Album value to Album Artist, Composer or Conductor, and to extract the Track Number from Title fields which contain a number. There is also a format to extract a Subtitle from a Title field.

The following example explains how to add a custom format which sets the information from the Subtitle field also in the Comment field. Create a new format using Add and set a new name, e.g. "Subtitle to Comment". Then enter "%{subtitle}" in Source and "%{comment}(.*)" for Extraction and click Save Settings. The expression in Source can contain format codes for arbitrary tag frames; multiple codes can be used to combine the contents from different frames. For each track, a text is generated from its tags using the Source format, and the regular expression from Extraction is applied to this text to set new values for the tags. Format codes are used before the capturing parentheses to specify the tag frame where the captured text shall be stored. It works in the same way as for the import from file/clipboard. Import from Tags... is also directly available from the File menu.
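The two-step "Subtitle to Comment" operation described above can be sketched in Python; this is an illustration of the mechanism, not Kid3 code, and the subtitle value is invented.

```python
import re

# Invented example tags for one track.
tags = {"subtitle": "Live in Berlin"}

source = "%{subtitle}"          # Source format from the example above
extraction = r"%{comment}(.*)"  # Extraction expression from the example

# Step 1: fill the Source format with the track's tag values.
text = re.sub(r"%\{(\w+)\}", lambda m: tags.get(m.group(1), ""), source)

# Step 2: turn the Extraction expression into a named-group regex
# ("%{comment}(" becomes "(?P<comment>") and apply it to the text;
# each captured group sets the tag frame named before the parentheses.
rx = re.compile(re.sub(r"%\{(\w+)\}\(", r"(?P<\1>", extraction))
m = rx.search(text)
if m:
    tags.update(m.groupdict())
# tags["comment"] → "Live in Berlin"
```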
The difference between these two functions is that the subdialog of the import dialog operates on all files of the current directory, whereas the menu function operates on the selected files (which can be in different directories). The menu function supports an additional code "%{__return}" to return the extracted value, which can be useful with the CLI and QML interfaces.

File → Import from gnudb.org...
File → Import from TrackType.org...
File → Import from Discogs...
File → Import from Amazon...
File → Import from MusicBrainz Release...
File → Import from MusicBrainz Fingerprint...
File → Import from Tags...
File → Automatic Import...

The tag type (Tag 1, Tag 2, Tag 1 and Tag 2) can be selected using the Destination combo box. Profiles determine which servers will be contacted to fetch album information. Some profiles are predefined (All, MusicBrainz, Discogs, Cover Art); custom profiles can be added using the Add button at the right of the Profile combo box.

The table below shows the servers which will be used when importing album information using the selected profile. The import process for an album is finished if all required information has been found, so the order of the rows in the table is important. It can be changed using the Move Up and Move Down buttons. Edit can be used to change an existing entry. The Server selection offers the same servers as can be used in the import functions. Standard Tags, Additional Tags, Cover Art determine the information which shall be fetched from the server. Finally, Accuracy is the minimum accuracy which must be achieved to accept the imported data. If the accuracy is insufficient, the next server in the list will be tried. The same dialog containing the server properties appears when Add is clicked to add a new server entry. Existing entries can be deleted using Remove.

To launch an automatic batch import with the selected profile, click Start. Details about the running import are displayed at the top of the dialog.
The process can be aborted with the Abort button.

File → Browse Cover Art...

Not all browsers support drag and drop of images, and the pictures on websites often have a URL; in such cases, Kid3 will receive the URL and not the picture. If the URL points to a picture, it will be downloaded. However, if the URL refers to some other web resource, it has to be translated to the corresponding picture. Such mappings are defined in the table URL extraction. The left column Match contains a regular expression which is compared with the URL. If it matches, the captured expressions in parentheses are inserted into the pattern of the right Picture URL column (at the positions marked with \1 etc.). The result of the replacement is the URL of the picture. By this means, cover art can be imported from Amazon, Google Images, etc. using drag and drop. It is also possible to define your own mappings.

File → Export...

The format settings are similar to those in the Import dialog: the topmost field contains the title (e.g. "CSV unquoted"), followed by the header, which will be generated at the beginning of the file. The track data follows; it is used for every track. Finally, the trailer can be used to generate some finishing text. The format fields do not contain regular expressions as in the Import dialog, but only output format expressions with special %-expressions, which will be replaced by values from the tags. The whole thing works like the file name format, and the same codes are used plus some additional codes. Not only the codes listed below but all tag frame names can be used.

A few formats are predefined. "CSV unquoted" separates the fields by tabs. Data in this format can be imported again into Kid3 using the import format with the same name. "CSV quoted" additionally encloses the fields in double quotes, which eases the import into spreadsheet applications. However, the fields shall not contain any double quotes when this format is used.
"Extended M3U" and "Extended PLS" generate playlists with extended attributes and absolute path names. "HTML" can be used to generate an HTML page with hyperlinks to the tracks. "Kover XML" creates a file which can be imported by the cover printing program Kover. "Technical Details" provides information about bit rate, sample rate, channels, etc. Finally, "Custom Format" is left empty for the definition of a custom format. You can define more formats of your own by adding lines in the file kid3rc in the configuration directory. The other formats can be adapted to your needs.

The source of the tags to generate the export data (Tag 1 or Tag 2) can be selected with a combo box. Pushing To File or To Clipboard stores the data in a file or on the clipboard. OK and Cancel close the dialog, whereas OK accepts the current dialog settings.

File → Create Playlist

The name of the playlist can be the Same as directory name or use a Format with values from the tags, e.g. "%{artist} - %{album}" to have the artist and album name in the playlist file name. The format codes are the same as for Export. Create new empty playlist will make an empty playlist with the given name. The extension depends on the playlist format.

The location of the generated playlist is determined by the selection of the Create in combo box:

Current directory
Every directory
Top-level directory

The Format of the playlist can be M3U, PLS or XSPF.

If Include only the selected files is checked, only the selected files will be included in the playlist. If a directory is selected, all of its files are selected. If this check box is not activated, all audio files are included in the playlist.

Sort by file name selects the usual case where the files are ordered by file name. With Sort by tag field, it is possible to sort by a format string with values from tag fields.
For instance, "%{track.3}" can be used to sort by track number (the ".3" is used to get three digits with leading zeros, because strings are used for sorting). It is also possible to use multiple fields, e.g. "%{genre}%{year}" to sort using a string composed of genre and year.

The playlist entries will have relative or absolute file paths depending on whether Use relative path for files in playlist or Use full path for files in playlist is set. When Write only list of files is set, the playlist will only contain the paths to the files. To generate an extended playlist with additional information, a format string can be set using the Write info using control.

File → Quit (Ctrl+Q)

The Edit Menu

Edit → Select All (Alt+A)
Edit → Deselect (Ctrl+Shift+A)
Edit → Select All in Directory
Edit → Previous File (Alt+Up)
Edit → Next File (Alt+Down)
Edit → Find... (Ctrl+F)
Edit → Replace... (Ctrl+R)

Depending on the number of files, the search might take some time; it can therefore be aborted by closing the dialog.

The Tools Menu

Tools → Apply Filename Format
Tools → Apply Tag Format
Tools → Apply Text Encoding
Tools → Rename Directory...

If a directory separator "/" is found in the format, multiple directories are created. If you want to create a new directory instead of renaming the current directory, select Create Directory instead of Rename Directory. The source of the tag information can be chosen between From Tag 1 and Tag 2, From Tag 1 and From Tag 2. A preview of the rename operation performed on the first file can be seen in the From and To sections of the dialog. Multiple directories can be renamed by selecting them.

Tools → Number Tracks...

When Total number of tracks is checked, the number of tracks will also be set in the tags. It is possible to number the tracks over multiple directories. The folders have to be expanded and selected.
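The numbering behavior can be sketched as follows; this is an illustrative Python sketch with a hypothetical function name, not Kid3 code.

```python
def number_tracks(directories, start=1, reset_per_directory=False):
    """Assign consecutive track numbers over multiple directories,
    optionally restarting the counter for each directory."""
    numbered, counter = [], start
    for files in directories:
        if reset_per_directory:
            counter = start
        for name in files:
            numbered.append((name, counter))
            counter += 1
    return numbered

# Continuous numbering over two directories:
number_tracks([["a.mp3", "b.mp3"], ["c.mp3"]])
# → [("a.mp3", 1), ("b.mp3", 2), ("c.mp3", 3)]
# With reset_per_directory=True, "c.mp3" would get number 1 again.
```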
If Reset counter for each directory is checked, track numbering is restarted with the given number for each directory when multiple folders are selected. The number tracks dialog can also be used to format existing track numbers without changing the values when the check box left of Start number is deactivated. The total number of tracks will be added if the corresponding check box is active, which can be used to set the total for all selected tracks. If only formatting of the existing numbers is desired, this check box has to be deactivated too.

Tools → Filter...

These codes are replaced with the values for the file, and the resulting strings can be compared with the following operations: true expressions are replaced by 1, false by 0. True values are represented by 1, true, on and yes; false values by 0, false, off and no. Boolean operations are not, and, or (in this order of precedence) and can be grouped by parentheses. Some filter rules are predefined and can serve as examples for your own expressions:

All

Filename Tag Mismatch: Tests if the file path conforms with the file name format. This rule is automatically adapted if the file name format changes.

No Tag 1: Displays only files which do not have a tag 1.

No Tag 2: Displays only files which do not have a tag 2.

ID3v2.3.0 Tag: Displays only files which have an ID3v2.3.0 tag.

ID3v2.4.0 Tag: Displays only files which have an ID3v2.4.0 tag.

Tag 1 != Tag 2: Displays files with differences between tag 1 and tag 2.

Tag 1 == Tag 2: Displays files with identical tag 1 and tag 2.

Incomplete: Displays files with empty values in the standard tags (title, artist, album, date, track number, genre).

No Picture: Displays only files which do not have a picture.

Marked: Displays only files which are marked because they violate the ID3 standard, are truncated or the picture is too large.

Custom Filter: To define your own filter, e.g. to display only files with an artist starting with "The", enter an expression such as %{artist} matches "The.*". Then click Save Settings. Click Apply to filter the files.
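As an illustration of how such a filter expression works, here is a minimal Python sketch of the "matches" comparison. It assumes the regular expression has to match the whole tag value; the function name is hypothetical and this is not Kid3 code.

```python
import re

def matches(tags, field, pattern):
    """A file passes the "matches" comparison when the value of the
    given tag field matches the regular expression (whole value,
    an assumption made for this sketch)."""
    return re.fullmatch(pattern, tags.get(field, "")) is not None

matches({"artist": "The Beatles"}, "artist", "The.*")  # passes the filter
matches({"artist": "Queen"}, "artist", "The.*")        # filtered out
```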
All files processed are displayed in the text view, with a "+" for those which match the filter and a "-" for the others. When finished, only the files with an artist starting with "The" are displayed, and the window title is marked with "[filtered]".

Tools → Convert ID3v2.3 to ID3v2.4
Tools → Convert ID3v2.4 to ID3v2.3
Tools → Play

The Settings Menu

Settings → Show Toolbar
Settings → Show Statusbar
Settings → Show Picture
Settings → Auto Hide Tags
Settings → Configure Kid3...

Tag specific options can be found on the Tags page, which is itself separated into four tabs for Tag 1, Tag 2, Tag 3, and All Tags.

If Mark truncated fields is checked, truncated ID3v1.1 fields will be marked red. The text fields of ID3v1.1 tags can only have 30 characters, the comment only 28 characters. Also the genre and track numbers are restricted, so fields can be truncated when imported or transferred from ID3v2. Truncated fields and the file will be marked red, and the mark will be removed after the field has been edited.

With Text encoding for ID3v1 it is possible to set the character set used in ID3v1 tags. This encoding is supposed to be ISO-8859-1, so it is recommended to keep this default value. However, there are tags around with a different encoding, so it can be set here, and the ID3v1 tags can then be copied to ID3v2, which supports Unicode.

The check box Use track/total number of tracks format controls whether the track number field of ID3v2 tags contains simply the track number or additionally the total number of tracks in the directory.

When Genre as text instead of numeric string is checked, all ID3v2 genres will be stored as a text string even if there is a corresponding code for ID3v1 genres. If this option is not set, genres for which an ID3v1 code exists are stored as the number of the genre code (in parentheses for ID3v2.3). Thus the genre Metal is stored as "Metal" or "(9)" depending on this option.
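The effect of this option can be sketched in Python. The ID3v1 code for "Metal" is 9, as the text above shows; genres without an ID3v1 code are always stored as text. The function name and the genre excerpt are illustrative, not Kid3 code.

```python
# Excerpt of the standard ID3v1 genre list (codes from the ID3v1 spec).
ID3V1_GENRES = {"Metal": 9, "Hard Rock": 79}

def genre_field(name, as_text):
    """Return the string stored in an ID3v2.3 genre field for the
    given genre name, depending on the "Genre as text" option."""
    code = ID3V1_GENRES.get(name)
    if as_text or code is None:
        return name            # stored as plain text
    return "(%d)" % code       # stored as numeric string, e.g. "(9)"

genre_field("Metal", as_text=False)         # → "(9)"
genre_field("Metal", as_text=True)          # → "Metal"
genre_field("Gothic Metal", as_text=False)  # → "Gothic Metal" (no ID3v1 code)
```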
Genres which are not in the list of ID3v1 genres are always stored as a text string. The purpose of this option is improved compatibility with devices which do not correctly interpret genre codes.

When WAV files with lowercase id3 chunk is checked, the RIFF chunk used to store ID3v2 tags in WAV files will be named "id3 " instead of "ID3 ". By default, Kid3 and other applications using TagLib accept both the lowercase and the uppercase variant when reading WAV files, but they use "ID3 " when writing ID3v2 tags to WAV files. As there exist other applications which only accept "id3 " (e.g. JRiver Media Center and foobar2000), this option can be used to create tags which can be read by such applications.

When Mark standard violations is checked, ID3v2 fields which violate the standard will be marked red. Details about the violation are shown in a tooltip:

The ID3 standard documents are available online:

Text encoding defines the default encoding used for ID3v2 frames and can be set to ISO-8859-1, UTF16, or UTF8. UTF8 is not valid for ID3v2.3.0 frames; if it is set, UTF16 will be used instead. For ID3v2.4.0 frames, all three encodings are possible.

Version used for new tags determines whether new ID3v2 tags are created as version 2.3.0 or 2.4.0.

Track number digits is the number of digits in Track Number fields. Leading zeros are used to pad. For instance, with a value of 2 the track number 5 is set as "05".

The combo box Comment field name is only relevant for Ogg/Vorbis and FLAC files and sets the name of the field used for comments. Different applications seem to use different names; "COMMENT" for instance is used by xmms, whereas amaroK uses "DESCRIPTION".

The format of pictures in Ogg/Vorbis files is determined by Picture field name, which can be METADATA_BLOCK_PICTURE or COVERART. The first is the official standard and uses the same format as pictures in FLAC tags. COVERART is an earlier unofficial way to include pictures in Vorbis comments.
It can be used for compatibility with legacy players.

If the Mark if larger than check box is activated, files containing embedded album cover art exceeding the given size in bytes are marked red. This can be used to find files containing oversized pictures which are not accepted by some applications and players. The default value is 131072 bytes (128 KB).

Custom Genres can be used to define genres which are not available in the standard genre list, e.g. "Gothic Metal". Such custom genres will appear in the Genre combo box of Tag 2. For ID3v1.1 tags, only the predefined genres can be used.

The list of custom genres can also be used to reduce the number of genres available in the Genre combo box to those typically used. If your collection mostly contains music in the genres Metal, Gothic Metal, Ancient and Hard Rock, you can enter those genres and mark Show only custom genres. The Tag 2 Genre combo box will then only contain those four genres and you will not have to search through the complete genres list for them. In this example, only Metal and Hard Rock will be listed in the tag 1 genres list, because only those two of the custom genre entries are standard genres. If Show only custom genres is not active, the custom genres can be found at the end of the genres list.

Quick Access Frames defines which frame types are always shown in the Tag 2 section. Such frames can then be added without first using the Add button. The order of these quick access frames can be changed by dragging and dropping items.

The combo box Track number field name is only relevant for RIFF INFO and sets the name of the field used for track numbers. Track numbers are not specified in the original RIFF standard; some applications use "ITRK", others use "IPRT".

Tag Format contains options for the format of the tags. When Automatically apply format is checked, the format configuration is automatically used while editing text in the line edits.
Validation enables validators in the controls with track/total and date/time values.

The Case conversion can be set to No changes, All lowercase, All uppercase, First letter uppercase or All first letters uppercase. To use locale-aware conversion between lowercase and uppercase characters, a locale can be selected in the combo box below.

The string replacement list can be set to arbitrary string mappings. To add a new mapping, select the From cell of a row and insert the text to replace, then go to the To column and enter the replacement text. To remove a mapping, set the From cell to an empty value (e.g. by first typing a space and then backspace). Inserting and deleting rows is also possible using a context menu which appears when the right mouse button is clicked. Replacement is only active if the String replacement check box is checked.

The table in Rating contains the mapping of star ratings to the effective values stored in the tag. The frames with rating information are listed in the Rating row of the frame list. For these frames, the rating can be set by giving a number of stars out of five. Different tag formats and different applications use different values to map the star rating to the value stored in the tag. In order to display the correct number of stars, Kid3 will look up the mapping in this table. The key to look up the mapping is the frame name, for example "RATING" as used for Vorbis comments or "IRTD" for RIFF INFO. For ID3v2 tags, a combined key is used, consisting of the frame ID "POPM" of the Popularimeter frame and its "Email" field, separated by a dot. Therefore, different keys for ID3v2 exist, e.g. "POPM.Windows Media Player 9 Series" for the mapping used by Windows Media Player and Explorer, and simply "POPM" for POPM frames with an empty "Email" field. As multiple entries for "POPM" can exist, their order is important.
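Such a mapping row can be illustrated with the values 1, 64, 128, 196, 255 (the Windows Explorer compatible row shown in Table 1). The following Python sketch is not Kid3 code; in particular, treating the threshold as the nearest mapped value is an assumption made for this illustration.

```python
# Example mapping row: stored values for 1 to 5 stars.
LEVELS = [1, 64, 128, 196, 255]

def stars(value):
    """Map a stored rating value back to a star count, assuming the
    thresholds lie between adjacent mapped values (nearest value wins)."""
    if value <= 0:
        return 0
    return min(range(5), key=lambda i: abs(LEVELS[i] - value)) + 1

def rating_value(star_count):
    # The value written into the tag for a given number of stars.
    return LEVELS[star_count - 1] if star_count else 0

stars(196)         # → 4 stars
rating_value(5)    # → 255
```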
When Kid3 adds a new Popularimeter frame, it will use the first "POPM" entry to determine the value to be written into the "Email" field. This value will then specify the mapping to be used for star ratings. The first entry is also used if no key was found; it is therefore the default entry.

Besides the Name column containing the keys, the table has columns 1 to 5 for the values to be stored when the corresponding number of stars is given. The other way round, the values determine the number of stars which are displayed for the value stored in the frame. For instance, the row in the table below contains the values 1, 64, 128, 196, 255. The thresholds for the number of stars to be displayed lie between these values and are compatible with what the Windows Explorer uses.

Table 1. Entry in Rating Table

On the page Files, the check box Load last-opened files can be marked so that Kid3 will open and select the last selected file when it is started the next time.

Preserve file timestamp can be checked to preserve the file modification time stamp.

Filename for cover sets the name which is suggested when an embedded image is exported to a file.

With Text encoding (Export, Playlist) the encoding used when writing files can be set. The default System can be changed, for example, if playlists have to be used on a different device.

If Mark changes is active, changed fields are marked with a light gray label background.

The section File List determines which files are displayed in the file list. A Filter can be used to restrict the items in this list to files with supported extensions. To explicitly specify which directories to display in the file list or exclude certain directories, the options Include folders and Exclude folders can be used. They can contain wildcard expressions, for instance */Music/* to include only the Music folder, or */iTunes/* to exclude the iTunes folder from the file list.
If multiple such expressions have to be used, they can be separated by spaces or semicolons. The buttons Filename from tag and Tag from filename in section Format open dialogs to edit the formats which are available in the Format combo boxes (with arrows up and down), which can be found in the File section of the main window. Filename Format contains options for the format of the filenames. The same options as in Tag Format are available. The User Actions page contains a table with the commands which are available in the context menu of the file list. For critical operations such as deleting files, it is advisable to mark Confirm to pop up a confirmation dialog before executing the command. Output can be marked to see the output written by console commands (standard output and standard error). Name is the name displayed in the context menu. Command is the command line to be executed. Arguments can be passed using the following codes: The special code @separator can be set as a command to insert a separator into the user actions context menu. Menu items can be put into a submenu by enclosing them with @beginmenu and @endmenu commands. The name of the submenu is determined by the Name column of the @beginmenu command. To execute QML scripts, @qml is used as a command name. The path to the QML script is passed as a parameter. The provided scripts can be found in the folder %{qmlpath}/script/ (on Linux typically /usr/share/kid3/qml/script/, on Windows qml/script/ inside the installation directory, and on macOS in the app folder kid3.app/Contents/Resources/qml/script/). Custom scripts can be stored in any directory. If the QML code uses GUI components, @qmlview shall be used instead of @qml. Additional parameters are passed to the QML script where they will be available via the getArguments() function. An overview of some functions and properties which are available in QML can be found in the appendix QML Interface. 
The command which will be inserted with %{browser} can be defined in the Web browser line edit above. Commands starting with %{browser} can be used to fetch information about the audio files from the web, for instance %{browser}{artist}:%u{title} will query the lyrics for the current song in LyricWiki[12]. The "u" in %u{artist} and %u{title} is used to URL-encode the artist %{artist} and song %{title} information. It is easy to define your own queries in the same way, e.g. an image search with Google[13].

To add album cover art to tag 2, you can search for images with Google or Amazon using the commands described above. The picture can be added to the tag with drag and drop. You can also add an image with Add, then select the Picture frame and import an image file or paste from the clipboard. Picture frames are supported for ID3v2, MP4, FLAC, Ogg and ASF tags. To add and delete entries in the table, a context menu can be used.

The Network page contains only a field to insert the proxy address and optionally the port, separated by a colon. The proxy will be used when importing from an Internet server when the checkbox is checked.

In the Plugins page, available plugins can be enabled or disabled. The plugins are separated into two sections. The Metadata Plugins & Priority list contains plugins which support audio file formats. The order of the plugins is important because they are tried from top to bottom. Some formats are supported by multiple plugins, so files will be opened with the first plugin supporting them. The TaglibMetadata plugin supports most formats; if it is at the top of the list, it will open most of the files. If you want to use a different plugin for a file format, make sure that it is listed before the TaglibMetadata plugin. Details about the metadata plugins and why you may want to use them instead of TagLib are listed below. The Available Plugins section lists the remaining plugins.
Their order is not important, but they can be enabled or disabled using the check boxes. Plugins which are disabled will not be loaded. This can be used to optimize resource usage and startup time. The settings on this page only take effect after a restart of Kid3.

Settings → Configure Shortcuts...

The Help Menu

Help → Kid3 Handbook
Help → About Kid3

KID3-CLI

Commands

kid3-cli offers a command-line interface for Kid3. If a directory path is used, the directory is opened. If one or more file paths are given, the common directory is opened and the files are selected. Subsequent commands will then work on these files. Commands are specified using -c options. If multiple commands are passed, they are executed in the given order. If files are modified by the commands, they will be saved at the end. If no command options are passed, kid3-cli starts in interactive mode. Commands can be entered and will operate on the current selection. The following sections list all available commands.

help [COMMAND-NAME]

Displays help about the parameters of COMMAND-NAME or about all commands if no command name is given.

Timeout

timeout [default | off | TIME]

Overwrite the default command timeout. The CLI commands abort after a command-specific timeout expires. This timeout is 10 seconds for ls and albumart, 60 seconds for autoimport and filter, and 3 seconds for all other commands. If a huge number of files has to be processed, these timeouts may be too restrictive; thus the timeout for all commands can be set to TIME ms, switched off altogether or be left at the default values.

Quit application

exit [force]

Exit application. If there are modified unsaved files, the force parameter is required.

Change directory

cd [DIRECTORY]

If no DIRECTORY is given, change to the home directory. If a directory is given, change into the directory. If one or more file paths are given, change to their common directory and select the files.
Print the current working directory

pwd

Print the filename of the current working directory.

Directory list

ls

List the contents of the current directory. This corresponds to the file list in the Kid3 GUI. Five characters before the file names show the state of the file.

kid3-cli> ls
  1-- 01 Intro.mp3
> 12- 02 We Only Got This One.mp3
 *1-- 03 Outro.mp3

In this example, all files have a tag 1, the second file also has a tag 2 and it is selected. The third file is modified.

Save the changed files

save

Select file

select [all | none | first | previous | next | FILE...]

To select all files, enter select all, to deselect all files, enter select none. To traverse the files in the current directory start with select first, then go forward using select next or backward using select previous. Specific files can be added to the current selection by giving their file names. Wildcards are possible, so select *.mp3 will select all MP3 files in the current directory.

kid3-cli> select first
kid3-cli> ls
> 1-- 01 Intro.mp3
  12- 02 We Only Got This One.mp3
 *1-- 03 Outro.mp3
kid3-cli> select next
kid3-cli> ls
  1-- 01 Intro.mp3
> 12- 02 We Only Got This One.mp3
 *1-- 03 Outro.mp3
kid3-cli> select *.mp3
kid3-cli> ls
> 1-- 01 Intro.mp3
> 12- 02 We Only Got This One.mp3
>*1-- 03 Outro.mp3

Select tag

tag [TAG-NUMBERS]

Many commands have an optional TAG-NUMBERS parameter, which specifies whether the command operates on tag 1, 2, or 3. If this parameter is omitted, the default tag numbers are used, which can be set by this command. At startup, it is set to 12 which means that information is read from tag 2 if available, else from tag 1; modifications are done on tag 2. The TAG-NUMBERS can be set to 1, 2, or 3 to operate only on the corresponding tag. If the parameter is omitted, the current setting is displayed.
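The default "12" fallback can be sketched in Python. This is an illustration of the lookup rule only, not Kid3 code; the tag dictionaries are hypothetical stand-ins for the frames of tag 1 and tag 2.

```python
def effective_frame(name, tag1, tag2, tag_numbers="12"):
    """Illustrate the TAG-NUMBERS selection rule (sketch only).

    tag1 and tag2 are dictionaries mapping frame names to values.
    With the default "12", a frame is read from tag 2 if available,
    else from tag 1; with "1" or "2", only that tag is consulted.
    """
    if tag_numbers == "12":
        return tag2.get(name, tag1.get(name))
    if tag_numbers == "1":
        return tag1.get(name)
    if tag_numbers == "2":
        return tag2.get(name)
    raise ValueError("unsupported TAG-NUMBERS: " + tag_numbers)
```

For a file with an ID3v1 artist but no ID3v2 artist, the default setting falls back to the tag 1 value, matching the behavior described above.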
Get tag frame

get [all | FRAME-NAME] [TAG-NUMBERS]

This command can be used to read the value of a specific tag frame or get information about all tag frames (if the argument is omitted or all is used). Modified frames are marked with a '*'.

kid3-cli> get
  File: MPEG 1 Layer 3 192 kbps 44100 Hz Joint Stereo
  Name: 01 Intro.mp3
Tag 1: ID3v1.1
  Title         Intro
  Artist        One Hit Wonder
  Album         Let's Tag
  Date          2013
  Track Number  1
  Genre         Pop
kid3-cli> get title
Intro

To save the contents of a picture frame to a file, use

get picture:'/path/to/folder.jpg'

To save synchronized lyrics to an LRC file, use

get SYLT:'/path/to/lyrics.lrc'

It is possible to get only a specific field from a frame, for example get POPM.Email for the Email field of a Popularimeter frame. If a file has multiple frames of the same kind, the different frames can be indexed with brackets, for example the first performer from a Vorbis comment can be retrieved using get performer[0], the second using get performer[1]. The pseudo field name "selected" can be used to check if a frame is selected, for example get artist.selected will return 1 if the artist frame is selected, else 0.

Set tag frame

set {FRAME-NAME} {FRAME-VALUE} [TAG-NUMBERS]

This command sets the value of a specific tag frame. If FRAME-VALUE is empty, the frame is deleted.

kid3-cli> set remixer 'O.H. Wonder'

To set the contents of a picture frame from a file, use

set picture:'/path/to/folder.jpg' 'Picture Description'

To set synchronized lyrics from an LRC file, use

set SYLT:'/path/to/lyrics.lrc' 'Lyrics Description'

To set a specific field of a frame, the field name can be given after a dot, e.g. to set the Counter field of a Popularimeter frame, use

set POPM.Counter 5

An application for field specifications is the case where you want a custom TXXX frame with "rating" description instead of a standard Popularimeter frame (this seems to be used by some plugins).
You can create such a TXXX rating frame with kid3-cli; however, you have to first create a TXXX frame with description "rating" and then set the value of this frame to the rating value.

kid3-cli> set rating ""
kid3-cli> set TXXX.Description rating
kid3-cli> set rating 5

The first command will delete an existing POPM frame, because if such a frame exists, set rating 5 would set the POPM frame and not the TXXX frame. Another possibility would be to use set TXXX.Text 5, but this would only work if there is no other TXXX frame present.

To set multiple frames of the same kind, an index can be given in brackets, e.g. to set multiple performers in a Vorbis comment, use

kid3-cli> set performer[0] 'Liza don Getti (soprano)'
kid3-cli> set performer[1] 'Joe Barr (piano)'

To select certain frames before a copy, paste or remove action, the pseudo field name "selected" can be used. Normally, all frames are selected; to deselect all, use set '*.selected' 0, then for example set artist.selected 1 to select the artist frame.

Revert

revert

Revert all modifications in the selected files (or all files if no files are selected).

Import from file

import {FILE} {FORMAT-NAME} [TAG-NUMBERS]

Tags are imported from the file FILE in the format with the name FORMAT-NAME (e.g. "CSV unquoted", see Import). If tags is given for FILE, tags are imported from other tags. Instead of FORMAT-NAME, the parameters SOURCE and EXTRACTION are then required, see Import from Tags. To apply the import from tags on the selected files, use tagsel instead of tags. This function also supports output of the extracted value by using an EXTRACTION with the value %{__return}(.+).

Automatic import

autoimport [PROFILE-NAME] [TAG-NUMBERS]

Batch import using profile PROFILE-NAME (see Automatic Import, "All" is used if omitted).

Download album cover artwork

albumart {URL} [all]

Set the album artwork by downloading a picture from URL. The rules defined in the Browse Cover Art dialog are used to transform general URLs (e.g.
from Amazon) to a picture URL. To set the album cover from a local picture file, use the set command.

kid3-cli> albumart

Export to file

export {FILE} {FORMAT-NAME} [TAG-NUMBERS]

Tags are exported to file FILE in the format with the name FORMAT-NAME (e.g. "CSV unquoted", see Export).

Create playlist

playlist

Create playlist in the format set in the configuration, see Create Playlist.

Apply filename format

filenameformat

Apply file name format set in the configuration, see Apply Filename Format.

Apply tag format

tagformat

Apply tag name format set in the configuration, see Apply Tag Format.

Apply text encoding

textencoding

Apply text encoding set in the configuration, see Apply Text Encoding.

Rename directory

renamedir [FORMAT] [create | rename | dryrun] [TAG-NUMBERS]

Rename or create directories from the values in the tags according to a given FORMAT (e.g. %{artist} - %{album}, see Rename Directory). If no format is given, the format defined in the Rename directory dialog is used. The default mode is rename; to create directories, create must be given explicitly. The rename actions will be performed immediately; to just see what would be done, use the dryrun option.

Number tracks

numbertracks [TRACK-NUMBER] [TAG-NUMBERS]

Number the selected tracks starting with TRACK-NUMBER (1 if omitted).

Filter

filter [FILTER-NAME | FILTER-FORMAT]

Filter the files so that only the files matching the FILTER-FORMAT are visible. The name of a predefined filter expression (e.g. "Filename Tag Mismatch") can be used instead of a filter expression, see Filter.
kid3-cli> filter '%{title} contains "tro"'
Started
 /home/urs/One Hit Wonder - Let's Tag
+  01 Intro.mp3
-  02 We Only Got This One.mp3
+  03 Outro.mp3
Finished
kid3-cli> ls
  1-- 01 Intro.mp3
  1-- 03 Outro.mp3
kid3-cli> filter All
Started
 /home/urs/One Hit Wonder - Let's Tag
+  01 Intro.mp3
+  02 We Only Got This One.mp3
+  03 Outro.mp3
Finished
kid3-cli> ls
  1-- 01 Intro.mp3
  12- 02 We Only Got This One.mp3
  1-- 03 Outro.mp3

Convert ID3v2.3 to ID3v2.4

to24

Convert ID3v2.4 to ID3v2.3

to23

Filename from tag

fromtag [FORMAT] [TAG-NUMBERS]

Set the file names of the selected files from values in the tags, for example fromtag '%{track} - %{title}' 1. If no format is specified, the format set in the GUI is used.

Tag from filename

totag [FORMAT] [TAG-NUMBERS]

Set the tag frames from the file names, for example totag '%{albumartist} - %{album}/%{track} %{title}' 2. If no format is specified, the format set in the GUI is used. If the format of the filename does not match this pattern, a few other commonly used formats are tried.

Tag to other tag

syncto {TAG-NUMBER}

Copy the tag frames from one tag to the other tag, e.g. to set the ID3v2 tag from the ID3v1 tag, use syncto 2.

Copy

copy [TAG-NUMBER]

Copy the tag frames of the selected file to the internal copy buffer. They can then be set on another file using the paste command. To copy only a subset of the frames, use the "selected" pseudo field with the set command. For example, to copy only the disc number and copyright frames, use

set '*.selected' 0
set discnumber.selected 1
set copyright.selected 1
copy

Paste

paste [TAG-NUMBER]

Set tag frames from the contents of the copy buffer in the selected files.

Remove

remove [TAG-NUMBER]

Remove a tag. It is possible to remove only a subset of the frames by selecting them as described in the copy command.

Examples

Set title containing an apostrophe. Commands passed to kid3-cli with -c have to be in quotes if they consist of more than a single word.
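As a sketch of how such a quoted command can be built programmatically, the following Python helper (hypothetical, not part of kid3-cli) constructs a set command and escapes embedded single quotes with a backslash, matching the style of the handbook's examples.

```python
def cli_set_command(frame, value):
    """Build a kid3-cli 'set' command string for use with -c.

    Single quotes inside the value are escaped with a backslash so
    that kid3-cli's own parser treats them as literal characters.
    (Illustrative helper, not part of kid3-cli itself.)
    """
    escaped = value.replace("'", "\\'")
    return "set {} '{}'".format(frame, escaped)

cmd = cli_set_command("title", "I'll be there for you")
# The embedded apostrophe is escaped: set title 'I\'ll be there for you'
```

The resulting string can then be passed as a single -c argument, e.g. via subprocess with an argument list, which avoids any additional shell-level quoting.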
If such a command itself has an argument containing spaces, that argument has to be quoted too. In UNIX shells, single or double quotes can be used, but on the Windows Command Prompt, it is important that the outer quoting is done using double quotes and inside these quotes, single quotes are used. If the text inside the single quotes contains a single quote, it has to be escaped using a backslash character, as shown in the following example:

kid3-cli -c "set title 'I\'ll be there for you'" /path/to/dir

Set album cover in all files of a directory using the batch import function:

kid3-cli -c "autoimport 'Cover Art'" /path/to/dir

Remove comment frames and apply the tag format in both tags of all MP3 files of a directory:

kid3-cli -c "set comment '' 1" -c "set comment '' 2" \
  -c "tagformat 1" -c "tagformat 2" /path/to/dir/*.mp3

Automatically import tag 2, synchronize to tag 1, set file names from tag 2 and finally create a playlist:

kid3-cli -c autoimport -c "syncto 1" -c fromtag -c playlist \
  /path/to/dir/*.mp3

For all files with an ID3v2.4.0 tag, convert to ID3v2.3.0 and remove the arranger frame:

kid3-cli -c "filter 'ID3v2.4.0 Tag'" -c "select all" -c to23 \
  -c "set arranger ''" /path/to/dir

This Python script uses kid3-cli to generate iTunes Sound Check iTunNORM frames from replay gain information.

#!/usr/bin/env python3
# Generate iTunes Sound Check from ReplayGain.
import os, sys, subprocess

def rg2sc(dirpath):
    for root, dirs, files in os.walk(dirpath):
        for name in files:
            if name.endswith(('.mp3', '.m4a', '.aiff', '.aif')):
                fn = os.path.join(root, name)
                rg = subprocess.check_output([
                    'kid3-cli', '-c',
                    'get "replaygain_track_gain"', fn]).strip()
                if rg.endswith(b' dB'):
                    rg = rg[:-3]
                try:
                    rg = float(rg)
                except ValueError:
                    print('Value %s of %s is not a float' % (rg, fn))
                    continue
                sc = (' ' + ('%08X' %
                      int((10 ** (-rg / 10)) * 1000))) * 10
                subprocess.call([
                    'kid3-cli', '-c',
                    'set iTunNORM "%s"' % sc, fn])

if __name__ == '__main__':
    rg2sc(sys.argv[1])

JSON Format

In order to make it easier to parse results from kid3-cli, it is possible to get the output in JSON format. When the request is in JSON format, the response will also be JSON. A compact format of the request will also give a compact representation of the response. If the request contains an "id" field, it is assumed to be a JSON-RPC request and the response will contain a "jsonrpc" field and the "id" of the request. The request format uses the same commands as the standard CLI, the "method" field contains the command and the parameters (if any) are given in the "params" list. The response contains a "result" object, which can also be null if the corresponding kid3-cli command does not return a result. In case of an error, an "error" object is returned with "code" and "message" fields as used in JSON-RPC.

kid3-cli> {"method":"set","params":["artist","An Artist"]}
{"result":null}
kid3-cli> {"method":"get","params":["artist",2]}
{"result":"An Artist"}
kid3-cli> {"method": "get", "params": ["artist"]}
{
  "result": "An Artist"
}
kid3-cli> {"jsonrpc":"2.0","id":"123","method":"get","params":["artist"]}
{"id":"123","jsonrpc":"2.0","result":"An Artist"}

CREDITS AND LICENSE

Kid3 Program written by Urs Fleisch <ufleisch at users.sourceforge.net>

FDL[22] GPL[23]

INSTALLATION

How to obtain Kid3

Kid3 can be found on its homepage.

Requirements

Kid3 needs Qt[24].
KDE[25] is recommended but not necessary, as Kid3 can also be compiled as a Qt application. Kid3 can be compiled for systems where these libraries are available, e.g. for GNU/Linux, Windows and macOS. To tag Ogg/Vorbis files, libogg[15], libvorbis and libvorbisfile[16] are required, for FLAC files libFLAC++ and libFLAC[17]. id3lib[14] is used for MP3 files. These four formats are also supported by TagLib[18], which can also handle Opus, MPC, APE, MP2, Speex, TrueAudio, WavPack, WMA, WAV, AIFF files and tracker modules. To import from acoustic fingerprints, Chromaprint[20] and libav[21] are used. Kid3 is available for most Linux distributions, Windows and macOS. Links can be found on the Kid3 homepage.

Compilation and Installation

You can compile Kid3 with or without KDE. Without KDE, Kid3 is a simple Qt application and lacks some configuration and session features. For a KDE version, go into the top directory and type

% cmake .
% make
% make install

To compile for different versions of Qt or KDE, set the corresponding cmake options. If not all libraries are present, Kid3 is built with reduced functionality. So you should take care to have all desired development packages installed. On the other hand, cmake options control which libraries are compiled in. The default is

-DWITH_TAGLIB:BOOL=ON -DWITH_MP4V2:BOOL=OFF -DWITH_ID3LIB:BOOL=ON
-DWITH_CHROMAPRINT:BOOL=ON -DWITH_VORBIS:BOOL=ON -DWITH_FLAC:BOOL=ON

These options can be disabled using OFF. To build Kid3 as a Qt application without KDE, use the cmake option -DWITH_APPS=Qt. To build both a KDE and a Qt application, set -DWITH_APPS="Qt;KDE". To use a specific Qt installation, set -DQT_QMAKE_EXECUTABLE=/path/to/qmake. Generation of RPM packages is supported by the file kid3.spec; for Debian packages, the script build-deb.sh is available. The Qt application can also be compiled for Windows and macOS. The script buildlibs.sh can be used to download and build all required libraries and create a Kid3 package.
Configuration

With KDE, the settings are stored in .config/kid3rc. As a Qt application, this file is in .config/Kid3/Kid3.conf. On Windows, the configuration is stored in the registry, on macOS in a plist file. The environment variable KID3_CONFIG_FILE can be used to set the path of the configuration file.

D-BUS INTERFACE

D-Bus Examples

On Linux, a D-Bus interface can be used to control Kid3 by scripts. Scripts can be written in any language with D-Bus bindings (e.g. in Python) and can be added to the User Actions to extend the functionality of Kid3. The artist in tag 2 of the current file can be set to the value "One Hit Wonder" with the following code:

Shell

dbus-send --dest=net.sourceforge.kid3 --print-reply=literal \
  /Kid3 net.sourceforge.Kid3.setFrame int32:2 string:'Artist' \
  string:'One Hit Wonder'

or easier with Qt's qdbus (qdbusviewer can be used to explore the interface in a GUI):

qdbus net.sourceforge.kid3 /Kid3 setFrame 2 Artist \
  'One Hit Wonder'

Python

import dbus
kid3 = dbus.SessionBus().get_object(
    'net.sourceforge.kid3', '/Kid3')
kid3.setFrame(2, 'Artist', 'One Hit Wonder')

Perl

use Net::DBus;
$kid3 = Net::DBus->session->get_service(
    "net.sourceforge.kid3")->get_object(
    "/Kid3", "net.sourceforge.Kid3");
$kid3->setFrame(2, "Artist", "One Hit Wonder");

D-Bus API

The D-Bus API is specified in net.sourceforge.Kid3.xml. The Kid3 interface has the following methods:

Open file or directory
    boolean openDirectory(string path);
    Parameters: path
    Returns true if OK.

Unload the tags of all files which are not modified or selected
    unloadAllTags(void);

Save all modified files
    boolean save(void);
    Returns true if OK.

Get a detailed error message provided by some methods
    string getErrorMessage(void);
    Returns detailed error message.
Revert changes in the selected files
    revert(void);

Start an automatic batch import
    boolean batchImport(int32 tagMask, string profileName);
    Parameters: tagMask, profileName

Import tags from a file
    boolean importFromFile(int32 tagMask, string path, int32 fmtIdx);
    Parameters: tagMask, path, fmtIdx
    Returns true if OK.

Import tags from other tags
    importFromTags(int32 tagMask, string source, string extraction);
    Parameters: tagMask, source, extraction

Import tags from other tags on selected files
    array importFromTagsToSelection(int32 tagMask, string source, string extraction);
    Parameters: tagMask, source, extraction, returnValues

Download album cover art
    downloadAlbumArt(string url, boolean allFilesInDir);
    Parameters: url, allFilesInDir

Export tags to a file
    boolean exportToFile(int32 tagMask, string path, int32 fmtIdx);
    Parameters: tagMask, path, fmtIdx
    Returns true if OK.

Create a playlist
    boolean createPlaylist(void);
    Returns true if OK.

Get items of a playlist
    array getPlaylistItems(string path);
    Parameters: path
    Returns list of absolute paths to playlist items.

Set items of a playlist
    boolean setPlaylistItems(string path, array items);
    Parameters: path, items
    Returns true if OK, false if not all items were found and added or saving failed.

Quit the application
    quit(void);

Select all files
    selectAll(void);

Deselect all files
    deselectAll(void);

Set the first file as the current file
    boolean firstFile(void);
    Returns true if there is a first file.

Set the previous file as the current file
    boolean previousFile(void);
    Returns true if there is a previous file.

Set the next file as the current file
    boolean nextFile(void);
    Returns true if there is a next file.

Select the first file
    boolean selectFirstFile(void);
    Returns true if there is a first file.

Select the previous file
    boolean selectPreviousFile(void);
    Returns true if there is a previous file.

Select the next file
    boolean selectNextFile(void);
    Returns true if there is a next file.

Select the current file
    boolean selectCurrentFile(void);
    Returns true if there is a current file.
Expand or collapse the current file item if it is a directory
    boolean expandDirectory(void);
    A file list item is a directory if getFileName() returns a name with '/' as the last character.
    Returns true if current file item is a directory.

Apply the file name format
    applyFilenameFormat(void);

Apply the tag format
    applyTagFormat(void);

Apply text encoding
    applyTextEncoding(void);

Set the directory name from the tags
    boolean setDirNameFromTag(int32 tagMask, string format, boolean create);
    Parameters: tagMask, format, create
    Returns true if OK, else the error message is available using getErrorMessage().

Set subsequent track numbers in the selected files
    numberTracks(int32 tagMask, int32 firstTrackNr);
    Parameters: tagMask, firstTrackNr

Filter the files
    filter(string expression);
    Parameters: expression

Convert ID3v2.3 tags to ID3v2.4
    convertToId3v24(void);

Convert ID3v2.4 tags to ID3v2.3
    convertToId3v23(void);
    Returns true if OK.

Get path of directory
    string getDirectoryName(void);
    Returns absolute path of directory.

Get name of current file
    string getFileName(void);
    Returns absolute file name, ends with "/" if it is a directory.

Set name of selected file
    setFileName(string name);
    Parameters: name
    The file will be renamed when the directory is saved.

Set format to use when setting the filename from the tags
    setFileNameFormat(string format);
    Parameters: format

Set the file names of the selected files from the tags
    setFileNameFromTag(int32 tagMask);
    Parameters: tagMask

Get value of frame
    string getFrame(int32 tagMask, string name);
    Parameters: tagMask, name
    To get binary data like a picture, the name of a file to write can be added after the name, e.g. "Picture:/path/to/file". In the same way, synchronized lyrics can be exported, e.g. "SYLT:/path/to/file".
    Returns value of frame.

Set value of frame
    boolean setFrame(int32 tagMask, string name, string value);
    Parameters: tagMask, name, value
    For tag 2 (tagMask 2), if no frame with name exists, a new frame is added; if value is empty, the frame is deleted.
    To add binary data like a picture, a file can be added after the name, e.g. "Picture:/path/to/file". "SYLT:/path/to/file" can be used to import synchronized lyrics.
    Returns true if OK.

Get all frames of a tag
    array of string getTag(int32 tagMask);
    Parameters: tagMask
    Returns list with alternating frame names and values.

Get technical information about file
    array of string getInformation(void);
    Properties are Format, Bitrate, Samplerate, Channels, Duration, Channel Mode, VBR, Tag 1, Tag 2. Properties which are not available are omitted.
    Returns list with alternating property names and values.

Set tag from file name
    setTagFromFileName(int32 tagMask);
    Parameters: tagMask

Set tag from other tag
    setTagFromOtherTag(int32 tagMask);
    Parameters: tagMask

Copy tag
    copyTag(int32 tagMask);
    Parameters: tagMask

Paste tag
    pasteTag(int32 tagMask);
    Parameters: tagMask

Remove tag
    removeTag(int32 tagMask);
    Parameters: tagMask

Reparse the configuration
    reparseConfiguration(void);
    Automated configuration changes are possible by modifying the configuration file and then reparsing the configuration.

Plays the selected files
    playAudio(void);

QML INTERFACE

QML Examples

QML scripts can be invoked via the context menu of the file list and can be set in the tab User Actions of the settings dialog. The scripts which are set there can be used as examples to program custom scripts. QML uses JavaScript; here is the obligatory "Hello World":

import Kid3 1.0

Kid3Script {
  onRun: {
    console.log("Hello world, directory is", app.dirName)
    Qt.quit()
  }
}

If this script is saved as /path/to/Example.qml, the user command can be defined as @qml /path/to/Example.qml with name QML Test and Output checked. It can then be started using the QML Test item in the file list context menu, and the output will be visible in the window. Alternatively, the script could also be started independent of Kid3 using the QML tools.
qml -apptype widget -I /usr/lib/kid3/plugins/imports /path/to/Example.qml

or

qmlscene -I /usr/lib/kid3/plugins/imports /path/to/Example.qml

On Windows and macOS, the import path must be adapted to the imports folder inside the installation directory. Scripts started outside of Kid3 will use the current directory, so it should be changed beforehand. To list the titles in tag 2 of all files in the current directory, the following script could be used:

import Kid3 1.0

Kid3Script {
  onRun: {
    app.firstFile()
    do {
      if (app.selectionInfo.tag(Frame.Tag_2).tagFormat) {
        console.log(app.getFrame(tagv2, "title"))
      }
    } while (app.nextFile())
  }
}

If the directory contains many files, such a script might block the user interface for some time. For longer operations, it should therefore have a break from time to time. The alternative implementation below has the work for a single file moved out into a function. This function invokes itself with a timeout of 1 ms at the end, given that more files have to be processed. This will ensure that the GUI remains responsive while the script is running.

import Kid3 1.0

Kid3Script {
  onRun: {
    function doWork() {
      if (app.selectionInfo.tag(Frame.Tag_2).tagFormat) {
        console.log(app.getFrame(tagv2, "title"))
      }
      if (!app.nextFile()) {
        Qt.quit()
      } else {
        setTimeout(doWork, 1)
      }
    }
    app.firstFile()
    doWork()
  }
}

When using app.firstFile() with app.nextFile(), all files of the current directory will be processed. If only the selected files shall be affected, use firstFile() and nextFile() instead; these are convenience functions of the Kid3Script component. The following example is a script which copies only the disc number and copyright frames of the selected file.
import Kid3 1.1

Kid3Script {
  onRun: {
    function doWork() {
      if (app.selectionInfo.tag(Frame.Tag_2).tagFormat) {
        app.setFrame(tagv2, "*.selected", false)
        app.setFrame(tagv2, "discnumber.selected", true)
        app.setFrame(tagv2, "copyright.selected", true)
        app.copyTags(tagv2)
      }
      if (!nextFile()) {
        Qt.quit()
      } else {
        setTimeout(doWork, 1)
      }
    }
    firstFile()
    doWork()
  }
}

More example scripts come with Kid3 and are already registered as user commands.

QML API

The API can be easily explored using the QML Console, which is available as an example script with a user interface.

Kid3Script

Kid3Script is a regular QML component located inside the plugin directory. You could use another QML component just as well. Using Kid3Script makes it easy to start the script function using the onRun signal handler. Moreover, it offers some functions:

onRun: Signal handler which is invoked when the script is started
tagv1, tagv2, tagv2v1: Constants for tag parameters
script: Access to scripting functions
configs: Access to configuration objects
getArguments(): List of script arguments
isStandalone(): true if the script was not started from within Kid3
setTimeout(callback, delay): Starts callback after delay ms
firstFile(): To first selected file
nextFile(): To next selected file

Scripting Functions

As JavaScript, and therefore QML too, has only a limited set of functions for scripting, the script object has some additional methods, for instance:

script.properties(obj): String with Qt properties
script.writeFile(filePath, data): Write data to file, true if OK
script.readFile(filePath): Read data from file
script.removeFile(filePath): Delete file, true if OK
script.fileExists(filePath): true if file exists
script.fileIsWritable(filePath): true if file is writable
script.getFilePermissions(filePath): Get file permission mode bits
script.setFilePermissions(filePath, modeBits): Set file permission mode bits
script.classifyFile(filePath): Get class of file (dir "/", symlink "@", exe "*", file " ")
script.renameFile(oldName, newName): Rename file, true if OK
script.copyFile(source, dest): Copy file, true if OK
script.makeDir(path): Create directory, true if OK
script.removeDir(path): Remove directory, true if OK
script.tempPath(): Path to temporary directory
script.musicPath(): Path to music directory
script.listDir(path, [nameFilters], [classify]): List directory entries
script.system(program, [args], [msecs]): Synchronously start a system command, [exit code, standard output, standard error] if not timeout
script.systemAsync(program, [args], [callback]): Asynchronously start a system command, callback will be called with [exit code, standard output, standard error]
script.getEnv(varName): Get value of environment variable
script.setEnv(varName, value): Set value of environment variable
script.getQtVersion(): Qt version string, e.g. "5.4.1"
script.getDataMd5(data): Get hex string of the MD5 hash of data
script.getDataSize(data): Get size of byte array
script.dataToImage(data, [format]): Create an image from data bytes
script.dataFromImage(img, [format]): Get data bytes from image
script.loadImage(filePath): Load an image from a file
script.saveImage(img, filePath, [format]): Save an image to a file, true if OK
script.imageProperties(img): Get properties of an image, map containing "width", "height", "depth" and "colorCount", empty if invalid image
script.scaleImage(img, width, [height]): Scale an image, returns scaled image

Application Context

Using QML, a large part of the Kid3 functions are accessible. The API is similar to the one used for D-Bus. For details, refer to the respective notes.
app.openDirectory(path): Open directory
app.unloadAllTags(): Unload all tags
app.saveDirectory(): Save directory
app.revertFileModifications(): Revert
app.importTags(tag, path, fmtIdx): Import file
app.importFromTags(tag, source, extraction): Import from tags
app.importFromTagsToSelection(tag, source, extraction): Import from tags of selected files
app.downloadImage(url, allFilesInDir): Download image
app.exportTags(tag, path, fmtIdx): Export file
app.writePlaylist(): Write playlist
app.getPlaylistItems(path): Get items of a playlist
app.setPlaylistItems(path, items): Set items of a playlist
app.selectAllFiles(): Select all
app.deselectAllFiles(): Deselect
app.firstFile([select], [onlyTaggedFiles]): To first file
app.nextFile([select], [onlyTaggedFiles]): To next file
app.previousFile([select], [onlyTaggedFiles]): To previous file
app.selectCurrentFile([select]): Select current file
app.selectFile(path, [select]): Select a specific file
app.getSelectedFilePaths([onlyTaggedFiles]): Get paths of selected files
app.requestExpandFileList(): Expand all
app.applyFilenameFormat(): Apply filename format
app.applyTagFormat(): Apply tag format
app.applyTextEncoding(): Apply text encoding
app.numberTracks(nr, total, tag, [options]): Number tracks
app.applyFilter(expr): Filter
app.convertToId3v23(): Convert ID3v2.4.0 to ID3v2.3.0
app.convertToId3v24(): Convert ID3v2.3.0 to ID3v2.4.0
app.getFilenameFromTags(tag): Filename from tags
app.getTagsFromFilename(tag): Filename to tags
app.getAllFrames(tag): Get object with all frames
app.getFrame(tag, name): Get frame
app.setFrame(tag, name, value): Set frame
app.getPictureData(): Get data from picture frame
app.setPictureData(data): Set data in picture frame
app.copyToOtherTag(tag): Tags to other tags
app.copyTags(tag): Copy
app.pasteTags(tag): Paste
app.removeTags(tag): Remove
app.playAudio(): Play
app.readConfig(): Read configuration
app.applyChangedConfiguration(): Apply configuration
app.dirName: Directory name
app.selectionInfo.fileName: File name
app.selectionInfo.filePath: Absolute file path
app.selectionInfo.detailInfo: Format details
app.selectionInfo.tag(Frame.Tag_1).tagFormat: Tag 1 format
app.selectionInfo.tag(Frame.Tag_2).tagFormat: Tag 2 format
app.selectionInfo.formatString(tag, format): Substitute codes in format string
app.selectFileName(caption, dir, filter, saveFile): Open file dialog to select a file
app.selectDirName(caption, dir): Open file dialog to select a directory

For asynchronous operations, callbacks can be connected to signals.

function automaticImport(profile) {
  function onAutomaticImportFinished() {
    app.batchImporter.finished.disconnect(onAutomaticImportFinished)
  }
  app.batchImporter.finished.connect(onAutomaticImportFinished)
  app.batchImport(profile, tagv2)
}

function renameDirectory(format) {
  function onRenameActionsScheduled() {
    app.renameActionsScheduled.disconnect(onRenameActionsScheduled)
    app.performRenameActions()
  }
  app.renameActionsScheduled.connect(onRenameActionsScheduled)
  app.renameDirectory(tagv2v1, format, false)
}

Configuration Objects

The different configuration sections are accessible via methods of configs. Their properties can be listed in the QML console.

script.properties(configs.networkConfig())

Properties can be set:

configs.networkConfig().useProxy = false

configs.batchImportConfig()
configs.exportConfig()
configs.fileConfig()
configs.filenameFormatConfig()
configs.filterConfig()
configs.findReplaceConfig()
configs.guiConfig()
configs.importConfig()
configs.mainWindowConfig()
configs.networkConfig()
configs.numberTracksConfig()
configs.playlistConfig()
configs.renDirConfig()
configs.tagConfig()
configs.tagFormatConfig()
configs.userActionsConfig()

AUTHOR

Urs Fleisch <ufleisch at users.sourceforge.net>

FDL
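Putting the pieces together, a small standalone script might look as follows. This is only a sketch: the report path is hypothetical, and it assumes it runs inside Kid3's scripting environment, using only the helpers documented above (script.getQtVersion, script.properties, script.tempPath, script.writeFile).

```qml
import Kid3 1.1

Kid3Script {
  onRun: {
    // Collect some information using the documented script helpers
    // and write it to a hypothetical report file in the temp directory.
    var report = "Qt version: " + script.getQtVersion() + "\n"
                 + script.properties(configs.networkConfig())
    if (script.writeFile(script.tempPath() + "/kid3-report.txt", report)) {
      console.log("report written")
    }
    Qt.quit()
  }
}
```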
https://manpages.debian.org/unstable/kid3-core/kid3-core.1.en.html
For example, if you are building a product catalog app, you can use the ListView control to display the list of products. The ListView control supports several advanced features that I plan to discuss in future blog entries. For example, you can group the items in a ListView, you can create master/details views with a ListView, and you can efficiently work with large sets of items with a ListView. In this blog entry, we'll keep things simple and focus on displaying a list of products.

There are three things that you need to do in order to display a list of items with a ListView:

- Create a data source
- Create an Item Template
- Declare the ListView

Creating the ListView Data Source

The first step is to create (or retrieve) the data that you want to display with the ListView. In most scenarios, you will want to bind a ListView to a WinJS.Binding.List object. The nice thing about the WinJS.Binding.List object is that it enables you to take a standard JavaScript array and convert the array into something that can be bound to the ListView. It doesn't matter where the JavaScript array comes from. It could be a static array that you declare or you could retrieve the array as the result of an Ajax call to a remote server.

The following JavaScript file – named products.js – contains a list of products which can be bound to a ListView.

(function () {
    "use strict";

    var products = new WinJS.Binding.List([
        { name: "Milk", price: 2.44 },
        { name: "Oranges", price: 1.99 },
        { name: "Wine", price: 8.55 },
        { name: "Apples", price: 2.44 },
        { name: "Steak", price: 1.99 },
        { name: "Eggs", price: 2.44 },
        { name: "Mushrooms", price: 1.99 },
        { name: "Yogurt", price: 2.44 },
        { name: "Soup", price: 1.99 },
        { name: "Cereal", price: 2.44 },
        { name: "Pepsi", price: 1.99 }
    ]);

    WinJS.Namespace.define("ListViewDemos", {
        products: products
    });

})();

The products variable represents a WinJS.Binding.List object.
This object is initialized with a plain-old JavaScript array which represents an array of products. To avoid polluting the global namespace, the code above uses the module pattern and exposes the products using a namespace. The list of products is exposed to the world as ListViewDemos.products. To learn more about the module pattern and namespaces in WinJS, see my earlier blog entry.

Creating the ListView Item Template

The ListView control does not know how to render anything. It doesn't know how you want each list item to appear. To get the ListView control to render something useful, you must create an Item Template. Here's what our template for rendering an individual product looks like:

<div id="productTemplate" data-win-control="WinJS.Binding.Template">
    <div class="product">
        <span data-win-bind="innerText: name"></span>
        <span data-win-bind="innerText: price"></span>
    </div>
</div>

This template displays the product name and price from the data source. Normally, you will declare your template in the same file as you declare the ListView control. In our case, both the template and ListView are declared in the default.html file. To learn more about templates, see my earlier blog entry.

Declaring the ListView

The final step is to declare the ListView control in a page. Here's the markup for declaring a ListView:

<div data-win-control="WinJS.UI.ListView"
     data-win-options="{
         itemDataSource: ListViewDemos.products.dataSource,
         itemTemplate: select('#productTemplate')
     }">
</div>

You declare a ListView by adding the data-win-control attribute to an HTML DIV tag. The data-win-options attribute is used to set two properties of the ListView. The ListView is associated with its data source with the itemDataSource property. Notice that the data source is ListViewDemos.products.dataSource and not just ListViewDemos.products. You need to associate the ListView with the dataSource property. The ListView is associated with its item template with the help of the itemTemplate property. The ID of the item template — #productTemplate — is used to select the template from the page.
Here’s what the complete version of the default.html page looks like: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>ListViewDemos</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- ListViewDemos references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script src="/js/products.js" type="text/javascript"></script> <style type="text/css"> .product { width: 200px; height: 100px; border: white solid 1px; } </style> </head> <body> <div id="productTemplate" data- <div class="product"> <span data-</span> <span data-</span> </div> </div> <div data- </div> </body> </html> Notice that the page above includes a reference to the products.js file: <script src=”/js/products.js” type=”text/javascript”></script> The page above also contains a Template control which contains the ListView item template. Finally, the page includes the declaration of the ListView control. Summary The goal of this blog entry was to describe the minimal set of steps which you must complete to use the WinJS ListView control to display a simple list of items. You learned how to create a data source, declare an item template, and declare a ListView control. Discussion
https://stephenwalther.com/archive/2012/05/18/metro-introduction-to-the-winjs-listview-control
John,

Your change just makes the server function return the same number that it receives. My server implementation multiplied the incoming number by two. My question was why the output from the writer shows first all output from the server, followed by all output from the client. I would like to see output from client and server alternate in the order in which the computation occurs, namely [server, client, server, client, server, client, ...]. I know that for finite sequences it is easily possible to reorder the output from the writer. However, it is not possible to reorder the output if the sequence is infinite. That can be seen in my example program if it is modified by removing the statement "take 10". In that case the program never prints output from the clients; the output will be only from the server.

Jan

From: John Vogel [mailto:jpvogel1 at gmail.com]
Sent: Monday, January 14, 2008 8:33 PM
To: Jan Stranik
Cc: haskell at haskell.org
Subject: Re: [Haskell] Simulating client server communication with recursive monads

If you redefine as follows:

server :: [Integer] -> Writer [String] [Integer]
server [] = return []
server (a:as) = do
  tell ["Server " ++ show a]
  rs <- server as
  return (a:rs)

You get this for the output:

["Server 0","Server 1","Server 2","Server 3","Server 4","Server 5","Server 6","Server 7","Server 8","Server 9","Server 10","Client 0","Client 1","Client 2","Client 3","Client 4","Client 5","Client 6","Client 7","Client 8","Client 9"]

Then you just need to alternate the pattern. Though a real simulation of server traffic would account for the packets not being received, rerequested, or received out of order. It also depends whether you are simulating TCP or UDP.

On 1/14/08, Jan Stranik <janstranik at yahoo.de> wrote:

Hello,

I am trying to simulate client server traffic using recursive lazy evaluation. I am trying to do that in a recursive writer monad.
The following code is my attempt to simulate client server interaction and collect its transcript:

{-# OPTIONS -fglasgow-exts #-}
module Main where

import Control.Monad.Writer.Lazy

simulation :: Writer [String] ()
simulation = mdo
  a  <- server cr
  cr <- client $ take 10 a
  return ()

server :: [Integer] -> Writer [String] [Integer]
server (a:as) = do
  tell ["server " ++ show a]
  rs <- server as
  return ((a*2):rs)
server [] = return []

client :: [Integer] -> Writer [String] [Integer]
client as = do
  dc <- doClient as
  return (0:dc)
  where
    doClient (a:as) = do
      tell ["Client " ++ show a]
      as' <- doClient as
      return ((a+1):as')
    doClient [] = return []

main = return $ snd $ runWriter simulation

The problem that I see is that the transcript collected contains first all output from the server, and then output from the client. Here is an example of the output that I see:

["server 0","server 1","server 3","server 7","server 15","server 31","server 63","server 127","server 255","server 511","server 1023","Client 0","Client 2","Client 6","Client 14","Client 30","Client 62","Client 126","Client 254","Client 510","Client 1022"]

I would like to collect the output like:

["client 0","server 0","client 1",...]

This would allow me to remove the ending condition in simulation (take 10), and instead rely fully on lazy evaluation to collect as many simulation steps as needed by my computation. I am still relatively new to the concepts of recursive monadic computations, so I would appreciate any suggestions from the experts on this mailing list.

Thank you,
Jan

_______________________________________________
Haskell mailing list
Haskell at haskell.org
http://www.haskell.org/pipermail/haskell/2008-January/020133.html
Introduction

I'll also cover nonstandard technologies, such as XMLHttpRequest and rich text editing, that Mozilla does support because no W3C equivalent existed at the time. The W3C web standards covered include:

- HTML 4.01, XHTML 1.0 and XHTML 1.1
- Cascading Style Sheets (CSS): CSS Level 1, CSS Level 2.1
- XML Path Language: XPath 1.0
- Resource Description Framework: RDF
- Simple Object Access Protocol: SOAP 1.1
- ECMA-262, revision 3 (JavaScript 1.5)

General cross-browser coding tips

Even though Web standards do exist, different browsers behave differently (in fact, the same browser may behave differently depending on the platform). Many browsers, such as Internet Explorer, also support pre-W3C APIs and have never added extensive support for the W3C-compliant ones. Before I go into the differences between Mozilla and Internet Explorer, I'll cover some basic ways you can make a Web application extensible in order to add new browser support later.

Since different browsers sometimes use different APIs for the same functionality, you can often find multiple if() else() blocks throughout the code to differentiate between the browsers. The following code shows such blocks, written for Netscape 4 and Internet Explorer 4:

...
var elm;
if (ns4)
  elm = document.layers["myID"];
else if (ie4)
  elm = document.all["myID"];

The above code isn't extensible, so if you want it to support a new browser, you must update these blocks throughout the Web application. The easiest way to eliminate the need to recode for a new browser is to abstract out functionality. Rather than multiple if() else() blocks, you increase efficiency by taking common tasks and abstracting them out into their own functions. Not only does this make the code easier to read, it simplifies adding support for new clients: any new browser that supports the W3C method, such as Opera or Safari, will work without any changes.
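A sketch of that abstraction idea follows. The getElement name is mine, for illustration, and the mock document object only stands in for the browser's real one so the snippet is self-contained; the point is that the wrapper detects the feature itself (does document.getElementById exist?) using an inline conditional, rather than sniffing the useragent.

```javascript
// Stand-in for the browser's document object (in a page, the real
// document is used instead); it exposes only the two lookups below.
var document = {
  getElementById: function (id) { return { id: id }; },
  all: {} // legacy IE 4 collection; unused when getElementById exists
};

// Feature detection with an inline conditional: prefer the W3C API,
// fall back to the legacy Internet Explorer collection.
function getElement(aID) {
  return document.getElementById ? document.getElementById(aID)
                                 : document.all[aID];
}

var elm = getElement("myID");
```

A browser that adds `getElementById` later needs no code changes: only this one function decides which API is used.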
Detecting whether the needed object or method exists (feature detection) is usually more robust than sniffing the useragent string. Useragent sniffing, however, makes sense when accuracy is important, such as when you're verifying that a browser meets the Web application's version requirements or you are trying to work around a bug. JavaScript also allows inline conditional statements, which can help with code readability.

Legacy browsers introduced tooltips into HTML by showing them on links and using the value of the alt attribute as a tooltip's content. The latest W3C HTML specification created the title attribute, which is meant to contain a detailed description of the link. Modern browsers will use the title attribute to display tooltips, and Mozilla only supports showing tooltips for that attribute and not the alt attribute.

The Document Object Model (DOM) is the tree structure that contains the document elements. You can manipulate it through JavaScript APIs, which the W3C has standardized. However, prior to W3C standardization, Netscape 4 and Internet Explorer 4 implemented the APIs similarly. Mozilla implements legacy APIs only if the functionality is unachievable with W3C web standards.

Accessing elements

To retrieve an element reference using the cross-browser approach, you use document.getElementById(aID), which works in Internet Explorer 5.0+, Mozilla-based browsers, and other W3C-compliant browsers, and is part of the DOM Level 1 specification. Mozilla does not support accessing an element through document.elementName or even through the element's name, which Internet Explorer does (also called global namespace polluting). Mozilla also does not support the Netscape 4 document.layers method and Internet Explorer's document.all. While document.getElementById lets you retrieve one element, you can also use document.layers and document.all to obtain a list of all document elements with a certain tag name, such as all <div> elements. The W3C DOM Level 1 method gets references to all elements with the same tag name through getElementsByTagName().
The method returns an array in JavaScript, and can be called on the document element or other nodes to search only their subtree. To get an array of all elements in the DOM tree, you can use getElementsByTagName("*").

The DOM Level 1 methods, as shown in Table 1, are commonly used to move an element to a certain position and toggle its visibility (menus, animations). Netscape 4 used the <layer> tag, which Mozilla does not support, as an HTML element that can be positioned anywhere. In Mozilla, you can position any element using the <div> tag, which Internet Explorer uses as well and which you'll find in the HTML specification.

Traverse the DOM

Mozilla supports the W3C DOM APIs for traversing the DOM tree through JavaScript (see Table 2). The APIs exist for each node in the document and allow walking the tree in any direction. Internet Explorer supports these APIs as well, but it also supports its legacy APIs for walking a DOM tree, such as the children property. Internet Explorer has a nonstandard quirk, where many of these APIs will skip white space text nodes that are generated, for example, by new line characters. Mozilla will not skip these, so sometimes you need to distinguish these nodes. Every node has a nodeType property specifying the node type. For example, an element node has type 1, while a text node has type 3 and a comment node is type 8. The best way to only process element nodes is to iterate over all child nodes and only process those with a nodeType of 1:

HTML:

<div id="foo">
  <span>Test</span>
</div>

JavaScript:

var myDiv = document.getElementById("foo");
var myChildren = myDiv.childNodes;
for (var i = 0; i < myChildren.length; i++) {
  if (myChildren[i].nodeType == 1) {
    // element node
  }
}

Generate and manipulate content
Mozilla also supports the legacy methods for adding content into the DOM dynamically, such as document.write, document.open and document.close. Mozilla also supports Internet Explorer's innerHTML method, which it can call on almost any node. It does not, however, support outerHTML (which adds markup around an element, and has no standard equivalent) and innerText (which sets the text value of the node, and which you can achieve in Mozilla by using textContent). Internet Explorer has several content manipulation methods that are nonstandard and unsupported in Mozilla, including retrieving the value, inserting text and inserting elements adjacent to a node, such as getAdjacentElement and insertAdjacentHTML. Table 3 shows how the W3C standard and Mozilla manipulate content, all of which are methods of any DOM node.

Document fragments

For performance reasons, you can create documents in memory, rather than working on the existing document's DOM. DOM Level 1 Core introduced document fragments, which are lightweight documents that contain a subset of a normal document's interfaces. For example, getElementById does not exist, but appendChild does. You can also easily add document fragments to existing documents. Mozilla creates document fragments through document.createDocumentFragment(), which returns an empty document fragment. Internet Explorer's implementation of document fragments, however, does not comply with the W3C web standards and simply returns a regular document.

JavaScript differences

Most differences between Mozilla and Internet Explorer are usually blamed on JavaScript. However, the issues usually lie in the APIs that a browser exposes to JavaScript, such as the DOM hooks. The two browsers possess few core JavaScript differences; issues encountered are often timing related.

JavaScript date differences

The only Date difference is the getYear method. As per the ECMAScript specification (which is the specification JavaScript follows), the method is not Y2k-compliant, and running new Date().getYear() in 2004 will return "104".
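This is easy to check directly. Modern engines still keep the specified behavior (it lives on in Annex B of ECMAScript), so the snippet below behaves the same in Mozilla's engine and in Node:

```javascript
// getYear is defined as the year minus 1900 (not Y2k-compliant),
// while getFullYear returns the full four-digit year.
var d = new Date(2004, 0, 1); // January 1, 2004

var legacy = d.getYear();     // 104 per the ECMAScript specification
var full = d.getFullYear();   // 2004
```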
Per the ECMAScript specification, getYear returns the year minus 1900, originally meant to return "98" for 1998. getYear was deprecated in ECMAScript Version 3 and replaced with getFullYear(). Internet Explorer changed getYear() to work like getFullYear() and make it Y2k-compliant, while Mozilla kept the standard behavior.

JavaScript execution differences

Different browsers execute JavaScript differently. For example, the following code assumes that the div node already exists in the DOM by the time the script block executes:

...
<div id="myDiv"></div>
<script>
  document.getElementById("myDiv").innerHTML = "Done.";
</script>
...

Such timing-related issues are also hardware-related -- slower systems can reveal bugs that faster systems hide. One concrete example is window.open, which opens a new window:

<script>
  function doOpenWindow(){
    var myWindow = window.open("about:blank");
    myWindow.location.href = "";
  }
</script>

The problem with the code is that window.open is asynchronous -- it does not block the JavaScript execution until the window has finished loading. Therefore, you may execute the line after the window.open line before the new window has finished loading. You can deal with this by having an onload handler in the new window and then calling back into the opener window (using window.opener).

Differences in JavaScript-generating HTML

JavaScript can, through document.write, generate HTML on the fly from a string. The main issue arises when JavaScript embedded inside an HTML document (thus, inside a <script> tag) generates HTML that contains a <script> tag. If the document is in strict rendering mode, it will parse the </script> inside the string as the closing tag for the enclosing <script>. The following code illustrates this best:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
...
<script>
  document.write("<script type='text\/javascript'>alert('Hello');<\/script>")
</script>

Since the page is in strict mode, Mozilla's parser will see the first <script> and parse until it finds a closing tag for it, which would be the first </script>. This is because the parser has no knowledge about JavaScript (or any other language) when in strict mode. In quirks mode, the parser is aware of JavaScript when parsing (which slows it down). Internet Explorer is always in quirks mode, as it does not support true XHTML. To make this work in strict mode in Mozilla, separate the string into two parts:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
...
<script>
  document.write("<script type='text\/javascript'>alert('Hello');</" + "script>")
</script>

Debug JavaScript

Mozilla provides several ways to debug JavaScript-related issues found in applications created for Internet Explorer. The first tool is the built-in JavaScript console, shown in Figure 1, where errors and warnings are logged. You can access it in Mozilla by going to Tools -> Web Development -> JavaScript Console or in Firefox (the standalone browser product from Mozilla) at Tools -> JavaScript Console.

Figure 1. JavaScript console

The JavaScript console can show the full log list or just errors, warnings, and messages. The error message in Figure 1 says that at aol.com, line 95 tries to access an undefined variable called is_ns70. Clicking on the link will open Mozilla's internal view source window with the offending line highlighted. The console also allows you to evaluate JavaScript. To evaluate the entered JavaScript syntax, type 1+1 into the input field and press Evaluate, as Figure 2 shows.

Figure 2. JavaScript console evaluating

Mozilla's JavaScript engine has built-in support for debugging, and thus can provide powerful tools for JavaScript developers. Venkman, shown in Figure 3, is a powerful, cross-platform JavaScript debugger that integrates with Mozilla.
It is usually bundled with Mozilla releases; you can find it at Tools -> Web Development -> JavaScript Debugger. For Firefox, the debugger isn't bundled; instead, you can download and install it separately.

CSS differences

Mozilla has the best CSS support of any browser, covering CSS Level 1, CSS Level 2.1 and parts of CSS3, compared to Internet Explorer as well as all other browsers. For most issues mentioned below, Mozilla will add an error or warning entry into the JavaScript console. Check the JavaScript console if you encounter CSS-related issues.

Mimetypes (when CSS files are not applied)

The most common CSS-related issue is that CSS definitions inside referenced CSS files are not applied. This is usually due to the server sending the wrong mimetype for the CSS file. The CSS specification states that CSS files should be served with the text/css mimetype. Mozilla will respect this and only load CSS files with that mimetype if the Web page is in strict standards mode. Internet Explorer will always load the CSS file, no matter which mimetype it is served with. Web pages are considered to be in strict standards mode when they start with a strict doctype. To solve this problem, you can make the server send the right mimetype or remove the doctype. I'll discuss more about doctypes in the next section.

Units in CSS values are another difference: in strict standards mode, Mozilla requires a unit on CSS length values, while Internet Explorer accepts a bare number:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "">
...
<div style="width: 40px;">This div has a width.</div>
<div style="width: 40;">This div has no width.</div>

Since the above example has a strict doctype, the page is rendered in strict standards mode. The first div will have a width of 40px, since it uses units, but the second div won't get a width, and thus will default to 100% width. The same would apply if the width were set through JavaScript.

CSS added the notion of overflow, which allows you to define how to handle overflow; for example, when the contents of a div with a specified height are taller than that height. The CSS standard defines that if no overflow behavior is set in this case, the div contents will overflow. However, Internet Explorer does not comply with this and will expand the div beyond its set height in order to hold the contents. Below is an example that shows this difference:

<div style="height: 100px; border: 1px solid black;">
  Content taller than 100px overflows this div in Mozilla,
  while Internet Explorer grows the div to hold it.
</div>

Another difference concerns the :hover pseudo-class. Internet Explorer applies a:hover only to links, while per the CSS specification Mozilla applies it to any <a> element the rule matches, including named anchors:

CSS:

a:hover { color: green; }

HTML:

<a name="anchor1">
This text changes color when hovered over.
</a>

Mozilla follows the CSS specification correctly and will change the color to green in this example. There are two ways to make Mozilla behave like Internet Explorer and not change the color of the text when it is hovered over:

- First, you can change the CSS rule to a:link:hover {color: green;}, which will only change the color if the element is a link (has an href attribute).
- Alternatively, you can change the markup and close the opened <a /> before the start of the text -- the anchor will continue to work this way.

Quirks versus standards mode

Older legacy browsers, such as Internet Explorer 4, rendered with so-called quirks under certain conditions. While Mozilla aims to be a standards-compliant browser, it has three modes that support older Web pages created with these quirky behaviors. The page's content and delivery determine which mode Mozilla will use.

Standards mode

Standards mode is the strictest rendering mode -- it will render pages per the W3C HTML and CSS specifications and will not support any quirks. Mozilla uses it for the following conditions:

- If a page is sent with a text/xml mimetype or any other XML or XHTML mimetype
- For any "DOCTYPE HTML SYSTEM" doctype (for example, <!DOCTYPE HTML SYSTEM "">), except for the IBM doctype
- For unknown doctypes or doctypes without DTDs

Almost standards mode

Mozilla introduced almost standards mode for one reason: a section in the CSS 2 specification breaks designs based on a precise layout of small images in table cells. Instead of forming one image to the user, each small image ends up with a gap next to it. The old IBM homepage shown in Figure 5 offers an example.

Figure 5. Image gap

Almost standards mode behaves almost exactly like standards mode, except when it comes to the image gap issue. The issue occurs often on standards-compliant pages and causes them to display incorrectly.
Mozilla uses almost standards mode for the following conditions:

- For any "loose" doctype (for example, <!DOCTYPE HTML PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">, <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">)
- For the IBM doctype (<!DOCTYPE html SYSTEM "">)

You can read more about the image gap issue.

Quirks mode

Currently, the Web is full of invalid HTML markup, as well as markup that only functions due to bugs in browsers. The old Netscape browsers, when they were the market leaders, had bugs. When Internet Explorer arrived, it mimicked those bugs in order to work with the content at that time. As newer browsers came to market, most of these original bugs, usually called quirks, were kept for backwards compatibility. Mozilla supports many of these in its quirks rendering mode. Note that due to these quirks, pages will render slower than if they were fully standards-compliant. Most Web pages are rendered under this mode. Mozilla uses quirks mode for the following conditions:

- When no doctype is specified
- For doctypes without a system identifier (for example, <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">)

For further reading, check out Mozilla Quirks Mode Behavior and Mozilla's DOCTYPE sniffing.

Event differences

Mozilla and Internet Explorer are almost completely different in the area of events. The Mozilla event model follows the W3C and Netscape model. In Internet Explorer, if a function is called from an event, it can access the event object through window.event. Mozilla passes an event object to event handlers, which must specifically pass the object on to the function called through an argument. A cross-browser event handling example follows:

<div onclick="handleEvent(event);">Click me!</div>

<script>
  function handleEvent(aEvent){
    // if aEvent is undefined, we are in Internet Explorer
    var myEvent = aEvent ? aEvent : window.event;
  }
</script>

Mozilla fully supports the W3C standard way of attaching listeners to DOM nodes. You use the addEventListener() and removeEventListener() methods, and have the benefit of being able to set multiple listeners for the same event type.
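The registration pattern can be sketched as follows. The makeTarget mock and the handler names are mine, for illustration: the mock stands in for a DOM node so the snippet runs outside a browser, but it reproduces the two properties of the W3C API that matter here, that multiple listeners may be registered for one event type, and that removal matches on all three arguments.

```javascript
// Minimal stand-in for a DOM EventTarget: stores listeners keyed by
// (type, function, capture flag) and can fire them.
function makeTarget() {
  var listeners = [];
  return {
    addEventListener: function (type, fn, useCapture) {
      listeners.push({ type: type, fn: fn, useCapture: useCapture });
    },
    removeEventListener: function (type, fn, useCapture) {
      // Removal only matches when all three parameters are identical.
      listeners = listeners.filter(function (l) {
        return l.type !== type || l.fn !== fn || l.useCapture !== useCapture;
      });
    },
    fire: function (type, event) {
      listeners.forEach(function (l) {
        if (l.type === type) l.fn(event);
      });
    }
  };
}

var target = makeTarget();
var clicks = 0;
function onClick() { clicks += 1; }
function logClick() { clicks += 10; }

// Multiple listeners for the same event type are allowed.
target.addEventListener("click", onClick, false);
target.addEventListener("click", logClick, false);
target.fire("click", {});            // both listeners run

// The same three arguments remove exactly that listener.
target.removeEventListener("click", logClick, false);
target.fire("click", {});            // only onClick runs now
```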
Both methods require three parameters: the event type, a function reference, and a boolean denoting whether the listener should catch events in their capture phase. If the boolean is set to false, it will only catch bubbling events. W3C events have three phases: capturing, at target, and bubbling. Every event object has an eventPhase attribute indicating the phase numerically (0 indexed). Every time you trigger an event, the event starts at the DOM's outermost element, the element at the top of the DOM tree. It then walks the DOM using the most direct route toward the target, which is the capturing phase. When the event reaches the target, the event is in the target phase. Afterwards, the event travels back up the tree toward the outermost element, which is the bubbling phase.

One advantage of addEventListener() and removeEventListener() over setting properties is that you can have multiple event listeners for the same event, each calling another function. Thus, removing an event listener requires all three parameters to be the same as the ones you used when adding the listener.

Mozilla does not support Internet Explorer's method of converting <script> tags into event handlers, which extends <script> with for and event attributes (see Table 5). It also does not support the attachEvent and detachEvent methods. Instead, you should use the addEventListener and removeEventListener methods. Internet Explorer does not support the W3C events specification.

Rich text editing

While Mozilla prides itself with being the most W3C web standards compliant browser, it does support nonstandard functionality, such as innerHTML and rich text editing, if no W3C equivalent exists. Mozilla 1.3 introduced an implementation of Internet Explorer's designMode feature, which turns an HTML document into a rich text editor field. Once turned into an editor, commands can run on the document through the execCommand command. Mozilla does not support Internet Explorer's contentEditable attribute for making any widget editable.
You can use an iframe to add a rich text editor. The following helper returns the document of an iframe in both browsers:

function getIFrameDocument(aID) {
  var rv = null;
  // if contentDocument exists, W3C compliant (Mozilla)
  if (document.getElementById(aID).contentDocument) {
    rv = document.getElementById(aID).contentDocument;
  } else {
    // IE
    rv = document.frames[aID].document;
  }
  return rv;
}

Another difference between Mozilla and Internet Explorer is the HTML that the rich text editor creates. Mozilla defaults to using CSS for the generated markup; however, Mozilla allows you to toggle between HTML and CSS mode using the useCSS execCommand, switching it between true and false. Internet Explorer always uses HTML markup.

As with standard HTML, Mozilla supports the W3C XML DOM specification, which allows you to manipulate almost any aspect of an XML document. Differences between Internet Explorer's XML DOM and Mozilla's are usually caused by Internet Explorer's nonstandard behaviors. Probably the most common difference is how they handle white-space text nodes. Often, generated XML contains white space between XML nodes. Internet Explorer, when using Node.childNodes, will not include these white-space nodes; in Mozilla, those nodes will be in the array.

XML:

<?xml version="1.0"?>
<myXMLdoc xmlns:
  <myns:foo>bar</myns:foo>
</myXMLdoc>

JavaScript:

var myXMLDoc = getXMLDocument().documentElement;
alert(myXMLDoc.childNodes.length);

The first line of JavaScript loads the XML document and accesses the root element (myXMLdoc) by retrieving the documentElement. The second line simply alerts the number of child nodes. Per the W3C specification, white spaces and new lines merge into one text node when they follow each other. For Mozilla, the myXMLdoc node has three children: a text node containing a new line and two spaces, the myns:foo node, and another text node with a new line. Internet Explorer, however, does not abide by this and will return "1" for the above code, namely only the myns:foo node.
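This behavior is easy to reproduce outside a browser. Python's standards-following xml.dom.minidom parser keeps whitespace-only text nodes exactly the way Mozilla does; the following is an illustrative aside, not part of the original article:

```python
from xml.dom import minidom

# A document with a newline and indentation between elements,
# similar in shape to the myXMLdoc example above.
doc = minidom.parseString("<myXMLdoc>\n  <foo>bar</foo>\n</myXMLdoc>")
root = doc.documentElement

# A compliant DOM keeps three children: a whitespace text node,
# the <foo> element, and another whitespace text node.
print(len(root.childNodes))                          # 3
print([node.nodeType for node in root.childNodes])   # [3, 1, 3]
```

The nodeType values printed (3 = text, 1 = element) match the type codes discussed below.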
Therefore, to walk the child nodes and disregard text nodes, you must distinguish such nodes. As mentioned earlier, every node has a nodeType property representing the node type. For example, an element node has type 1, while a document node has type 9. To disregard text and comment nodes, you must check for types 3 (text node) and 8 (comment node).

XML:

<?xml version="1.0"?>
<myXMLdoc xmlns:
  <myns:foo>bar</myns:foo>
</myXMLdoc>

JavaScript:

var myXMLDoc = getXMLDocument().documentElement;
var myChildren = myXMLDoc.childNodes;

for (var run = 0; run < myChildren.length; run++) {
  if (myChildren[run].nodeType != 3 && myChildren[run].nodeType != 8) {
    // not a text or comment node
  }
}

See Whitespace in the DOM for a more detailed discussion and a possible solution.

XML data islands

Internet Explorer has a nonstandard feature called XML data islands, which allows you to embed XML inside an HTML document using the nonstandard HTML tag <xml>. Mozilla does not support XML data islands and handles them as unknown HTML tags. You can achieve the same functionality using XHTML; however, because Internet Explorer's support for XHTML is weak, this is usually not an option.

Internet Explorer allows you to send and retrieve XML files using MSXML's XMLHTTP class, which is instantiated through ActiveX using new ActiveXObject("Msxml2.XMLHTTP") or new ActiveXObject("Microsoft.XMLHTTP"). Since there is no standard method of doing this, Mozilla provides the same functionality in the global JavaScript XMLHttpRequest object. Since version 7, Internet Explorer also supports the "native" XMLHttpRequest object. After instantiating the object using new XMLHttpRequest(), you can use the open method to specify the type of request (GET or POST), which file you load, and whether the call is asynchronous. If the call is asynchronous, give the onload member a function reference, which is called once the request has completed.
Synchronous request:

var myXMLHTTPRequest = new XMLHttpRequest();
myXMLHTTPRequest.open("GET", "data.xml", false);
myXMLHTTPRequest.send(null);
var myXMLDocument = myXMLHTTPRequest.responseXML;

Asynchronous request:

var myXMLHTTPRequest;

function xmlLoaded() {
  var myXMLDocument = myXMLHTTPRequest.responseXML;
}

function loadXML() {
  myXMLHTTPRequest = new XMLHttpRequest();
  myXMLHTTPRequest.open("GET", "data.xml", true);
  myXMLHTTPRequest.onload = xmlLoaded;
  myXMLHTTPRequest.send(null);
}

Mozilla requires XSLT files to be served with an XML mimetype (text/xml or application/xml). This is the most common reason why XSLT won't run in Mozilla but will in Internet Explorer; Mozilla is strict in that way.

Internet Explorer 5.0 and 5.5 supported XSLT's working draft, which is substantially different from the final 1.0 recommendation. The easiest way to tell which version an XSLT file was written against is to look at the namespace. The namespace for the 1.0 recommendation is, while the working draft's namespace is. Internet Explorer 6 supports the working draft for backwards compatibility, but Mozilla supports only the final recommendation.

If XSLT requires you to distinguish the browser, you can query the "xsl:vendor" system property. Mozilla's XSLT engine will report itself as "Transformiix" and Internet Explorer will return "Microsoft".

<xsl:if test="system-property('xsl:vendor') = 'Transformiix'">
  <!-- Mozilla specific markup -->
</xsl:if>
<xsl:if test="system-property('xsl:vendor') = 'Microsoft'">
  <!-- Internet Explorer specific markup -->
</xsl:if>

Mozilla also provides JavaScript interfaces for XSLT, allowing a Web site to complete XSLT transformations in memory. You can do this using the global XSLTProcessor JavaScript object. XSLTProcessor requires you to load the XML and XSLT files, because it needs their DOM documents. The XSLT document, imported by the XSLTProcessor, allows you to manipulate XSLT parameters.
XSLTProcessor can generate a standalone document using transformToDocument(), or it can create a document fragment using transformToFragment(), which you can easily append into another DOM document. Below is an example:

var xslStylesheet;
var xsltProcessor = new XSLTProcessor();

// load the xslt file, example1.xsl
var myXMLHTTPRequest = new XMLHttpRequest();
myXMLHTTPRequest.open("GET", "example1.xsl", false);
myXMLHTTPRequest.send(null);

// get the XML document and import it
xslStylesheet = myXMLHTTPRequest.responseXML;
xsltProcessor.importStylesheet(xslStylesheet);

// load the xml file, example1.xml
myXMLHTTPRequest = new XMLHttpRequest();
myXMLHTTPRequest.open("GET", "example1.xml", false);
myXMLHTTPRequest.send(null);

var xmlSource = myXMLHTTPRequest.responseXML;
var resultDocument = xsltProcessor.transformToDocument(xmlSource);

After creating an XSLTProcessor, you load the XSLT file using XMLHttpRequest. The XMLHttpRequest's responseXML member contains the XML document of the XSLT file, which is passed to importStylesheet. You then use XMLHttpRequest again to load the source XML document to be transformed; that document is then passed to the transformToDocument method of XSLTProcessor. Table 8 features a list of XSLTProcessor methods.

Original Document Information
- Author(s): Doron Rosenberg, IBM Corporation
- Published: 26 Jul 2005
- Link:
https://developer.mozilla.org/en-US/docs/Migrate_apps_from_Internet_Explorer_to_Mozilla$revision/132653
I wish to use the awesome power of Blender in a programmatic way, within my web application. I know I can run something of the sort from the command line:

Code:
blender foo.blend -P bar.py

but I wish to maintain the security and cleanliness of my code by not invoking command-line calls.

My web application is written in Python, hence my question is: what is the simplest way to separate the Blender Python modules from Blender itself? From what I understand, Blender has a custom Python interpreter built into it, which is how it connects with the extended Python modules (written in C). To me this does not sound like good news for my goal, but does anyone know a way?

To be crystal clear, I want to be able to do this, without having to invoke 'blender' on the command line:

Code:
import bpy
#Web application code...
bpy.ops.import_scene.obj(...)
#Web application code...

Cheers,
Matt
https://www.blender.org/forum/viewtopic.php?f=9&t=26504&p=102637
namespace Model { }
namespace View { }
namespace Controller { }

In the Richman, Inc. example I could add each component to its own assembly but use the appropriate namespace across the components in the model, view, and controller. It is also possible (although it cannot be done through Visual Studio) to put more than one component in each assembly and have three assemblies for the model, view, and controller respectively. Whatever the size of the program, it is best to remember that the fewer the assemblies, the easier it is to deploy.

So far I have gone from class diagrams to deployment diagrams. The link between the abstract classes and how they are deployed are components that contain multiple classes, grouped logically so that they can be deployed. The final article in this series, coming next month, will cover sequence diagrams.
http://www.devx.com/enterprise/Article/28296/0/page/3
VGET(9)                   BSD Kernel Manual                   VGET(9)

NAME
     vget - get a vnode from the free list

SYNOPSIS
     #include <sys/param.h>
     #include <sys/vnode.h>

     int vget(struct vnode *vp, int flags, struct proc *p);

DESCRIPTION
     Get a vnode from the free list and increment its reference count.
     Its arguments are:

     vp      The vnode to remove from the free list.
     flags   If non-zero, the vnode will also be locked.
     p       The process responsible for this call.

     When not in use, vnodes are kept on a free list. The vnodes still
     reference.

ERRORS
     [ENOENT]  The vnode vp is in the process of being cleaned out from
               the underlying file system.

SEE ALSO
     vnode(9), vput(9), vref(9), vrele(9)

AUTHORS
     This man page was originally written by Doug Rabson for FreeBSD.

MirOS BSD #10-current               July 24
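The pattern vget() implements, taking an object off a free list while bumping its reference count and optionally locking it, is general enough to sketch in a few lines of Python. This is a toy illustration only, in no way kernel code:

```python
class VNode:
    """Toy stand-in for a kernel vnode: a name plus a reference count."""
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.locked = False

free_list = [VNode("a"), VNode("b")]

def vget(lock=False):
    # Remove a vnode from the free list, increment its reference count,
    # and optionally lock it, mirroring vget()'s flags argument.
    node = free_list.pop(0)
    node.refcount += 1
    node.locked = bool(lock)
    return node

n = vget(lock=True)
print(n.name, n.refcount, n.locked)  # a 1 True
```

Once the caller is done, the real API's counterparts (vput(9), vrele(9)) drop the reference and return the vnode to the free list.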
http://www.mirbsd.org/htman/i386/man9/vget.htm
Rails 2.0 contains various improvements to overall performance and supports advanced techniques to speed up the application. Among them, the most interesting ones are:

JavaScript Bundling

JavaScript bundling is the capability to bundle together all the JavaScript files of your application (or part of them) into a single file before sending it to the client. Nowadays, web applications use many JavaScript libraries to provide a better user experience and rich visual effects. The downside is degraded performance, because the browser has to perform multiple requests to retrieve every single JavaScript library. By bundling all of them together, Rails 2.0 can speed up the delivery of JavaScript files, since the browser needs to perform only one request. RailTrackr is no different; it requires a lot of JavaScript files to provide its fancy effects. For comparison, Figure 6 shows the time required to access the page that contains all the photosets for a given user without JavaScript bundling, and Figure 7 shows how long it takes with JavaScript bundling. Both times are reported by Firebug.

Activating JavaScript bundling is dead simple. You need only add the :cache attribute to the javascript_include_tag statement in your view templates, as follows:

<%= javascript_include_tag "prototype", "effects", "dragdrop", "instant", "reflex", :cache => "railtrackr" %>
<!-- Will create a single railtrackr.js file -->

You can apply the same technique to stylesheet bundling, acting on the stylesheet_link_tag command. Bundling is enabled when Rails is in production mode and disabled during development, to avoid unnecessary complications to the developer's debugging sessions (and life in general).

Cookie-based Sessions

Another performance improvement results from Rails 2.0 storing session data in browser cookies instead of using the database or temporary files on the server.
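The underlying idea is client-held state protected by a server-side message authentication code (MAC). Here is a minimal Python sketch of that idea; it is illustrative only, and Rails' actual cookie format, serializer, and secret handling differ:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"keep-this-on-the-server"  # hypothetical secret, never sent to clients

def encode_session(data):
    """Serialize session data and append an HMAC so tampering is detectable."""
    payload = base64.b64encode(json.dumps(data).encode())
    digest = hmac.new(SECRET, payload, hashlib.sha1).hexdigest().encode()
    return payload + b"--" + digest

def decode_session(cookie):
    """Verify the HMAC before trusting anything the client sent back."""
    payload, digest = cookie.rsplit(b"--", 1)
    expected = hmac.new(SECRET, payload, hashlib.sha1).hexdigest().encode()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("session cookie was tampered with")
    return json.loads(base64.b64decode(payload))

cookie = encode_session({"user_id": 42})
print(decode_session(cookie))  # {'user_id': 42}
```

Because only the server knows the secret, a client can read its own session but cannot forge or alter it without detection.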
This places all the state information on the client, allowing the application to behave statelessly; performance can then be improved through horizontal scalability. When necessary, you can add more servers, each hosting the same application, to increase throughput without having to worry about keeping session data in sync across replicas. However, you should avoid this solution when critical data is stored in the session, or when the session is used to store very big objects, even though Rails signs the session data it stores in cookies. The reason is that the browser sends cookies back to the server on every request, so they have to remain lightweight.

Domain Multi-targeting

Modern browsers usually open only two concurrent connections to each domain they access. Since the pages of RailTrackr contain many images, this may hurt performance, because the browser queues all the requests. With the following statement you can instruct Rails to activate domain multi-targeting and generate URLs for static resources distributed across multiple domains (four by default, whose names are created by replacing the %d symbol with the numbers 0 to 3):

ActionController::Base.asset_host = "static%d.yoursite.com"

The browser will then download eight images concurrently instead of two. To activate this change, you have to configure DNS aliases on your domain first.

Query Caching

Lastly, Rails 2.0 contains other performance improvements specific to the database layer, such as query caching, which prevents executing the same query multiple times while processing a single request.

What About the Database?

The sample code is focused only on the changes that Rails 2.0 brought to the user-facing side of application development. It entirely discounts models and database mapping in favor of remote API calls to the Flickr Service. However, the new Rails version contains a whole set of changes to its backend and to the popular ActiveRecord persistence layer.
These are beyond the scope of this article, but here are some brief descriptions of the most important changes:

What Have You Learned?

This article explored the changes Rails 2.0 brought to the web development world. It focused mostly on user-facing aspects, providing full REST support for your application while keeping an eye on performance. It also provided a glimpse of the changes that have occurred in the Rails backend. Among the smaller features Rails 2.0 introduced are improved debugging thanks to the integration of the ruby-debug library, improved JSON support, improved builders for RSS feeds, namespaced routes, HTTP Basic authentication support, and improved security. For further exploration of these and other features, check out the links provided in the Related Resources section in the left column, or just play around with RailTrackr and its source code. Remember, updates will be regularly released at battlehorse.net.
http://www.devx.com/opensource/Article/37416/0/page/5
Great news! Welcome IntelliJ IDEA 2016.2.3, a fresh bugfix update for IntelliJ IDEA 2016.2. Check out the release notes to see the list of available fixes. To update, either download and install the new version from scratch, or (if you're running IntelliJ IDEA 2016.2.2) click Check for Updates and then Download and Install to apply the patch. Develop with Pleasure!

Thanks for the update, but now not all keybindings work. I use Emacs keybindings and everything was awesome before the update. Now, for example, in the "VCS operations" popup I cannot navigate using "CTRL+N" or "CTRL+P"; it now just types "n" or "p" and searches for popup items. The terminal does not respond to "CTRL+C", so I cannot terminate a program, which is important. The only solution now is to close the terminal session.

Same for me:
1. In Terminal Ctrl+C doesn't work. Instead I see letters C typed in
2. When I try to type in # using the Swiss keyboard layout (Alt + 3) it doesn't work.

Same for me with UK layout on OSX El Capitan.

I have the same problem as Alex, alt+3 does not produce the # symbol, which we all require for commenting out things...

Same problem with # symbol (UK layout on OSX El Capitan). Extremely annoying problem and no fixes for five days already.

Same here. Seriously annoying!!

Same for me with Spanish - ISO on OSX El Capitan

+1

+1 for UK layout on OSX El Capitan

+1

+1 having same issue with ctrl-c on terminal. Besides this, great product!
IntelliJ IDEA 2016.2.3
JRE: 1.8.0_112-release-b287 x86_64

+1

In Terminal Ctrl+C doesn't work and no fixes for one week already.

Yeah. Same happens for me. I am used to writing commands in the terminal within IntelliJ, so it is a bit frustrating when it suddenly doesn't work. US keyboard layout on El Capitan

Same problem, alt+3 outputs 3 and not the # symbol = inability to comment (UK layout on OSX El Capitan + IntelliJ 2016.2.3). Downgrading to 2016.2.2 fixes this issue. Hopefully this bug will be fixed in 2016.2.4

+1

Same problem here.
Crazy how such a small thing can make such a large impact. I'm having the same problem with the terminal on OS X El Capitan. Also, after upgrading, it changed the keymap settings to the default. Annoying. What gives guys?

#####################################
IntelliJ IDEA 2016.2.3
Build #IU-162.1812.17, built on August 30, 2016
JRE: 1.8.0_112-release-b287 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o

same here and it is really annoying

Same issue here, alt + 3 no longer works

Hi, after the update I don't see the keyboard layout option as described in ?

Hi, after the update I can't make a sharp and dollar sign.

After this update alt+4 outputs 4 instead of $ when using IntelliJ IDEA. $ is output in other applications as expected. Mac OSX El Capitan / Finnish keyboard layout ("Finnish/Swedish - (KS)"). I will not be forwarding this into any bug tracking software or other. I had to downgrade versions to solve this issue.

Confirming that this is an issue with the release. Happens to me as well with a Finnish/Swedish keyboard on OSX El Capitan.

Confirmed here too. Swedish keyboard, El Capitan.

After the update alt+3 outputs 3 instead of #, only in IntelliJ IDEA. Other apps work ok. Mac OSX El Capitan, Spanish keyboard.

I have nothing to add except to confirm this exact same issue. Finnish keyboard and El Capitan.

The $ problem is fixed in: IntelliJ IDEA 2016.2.4, Build #IU-162.2032.8, built on September 9, 2016

I get an error trying to update from 2016.2.2; luckily the current installation is not affected.

Temp. directory: C:\TEMP\
java.lang.OutOfMemoryError
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:326)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at ie.wombat.jbdiff.JBPatch.bspatch(JBPatch.java:149)
    at com.intellij.updater.BaseUpdateAction.applyDiff(BaseUpdateAction.java:112)
    at com.intellij.updater.UpdateAction.doApply(UpdateAction.java:44)
    at com.intellij.updater.PatchAction.apply(PatchAction.java:184)
    at com.intellij.updater.Patch$3.forEach(Patch.java:308)
    at com.intellij.updater.Patch.forEach(Patch.java:360)
    at com.intellij.updater.Patch.apply(Patch.java:303)
    at com.intellij.updater.PatchFileCreator.apply(PatchFileCreator.java:84)
    at com.intellij.updater.PatchFileCreator.apply(PatchFileCreator.java:75)
    at com.intellij.updater.Runner.doInstall(Runner.java:304)

Running on Win7, tried to clean Temp without help.

Please contact our support: and attach the logs.

The out of memory error when updating can be solved by adding more memory for Java through environment variables, as usual.

CTRL + C no longer working in terminal

Please, try this workaround

How do I update from version 2016.2.1? I only have the download option. Or where can I find a link to upgrade from 2016.2.2 w/o reinstalling it?

I was stopped in the process by OutOfMemoryError after the patch was downloaded (Windows 10, 32bit, -Xmx512 was probably too low). But after reading the first comments here, I'd rather wait for 2016.2.4 or something.

Please make Ctrl+C work in the terminal window (OSX El Capitan)!!!! It stopped working in the terminal window after upgrading to 2016.2.3

CTL + C broken in the terminal window, have to kill -9 the app server. Mac OS El Capitan
IntelliJ IDEA 2016.2.3
Build #IU-162.1812.17, built on August 30, 2016
JRE: 1.8.0_112-release-b287 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o

+1

Thanks for the update. The performance problems in 2016.2.1 were totally solved!
Hash # key isn't working for me either, just inputs 3 instead. On OSX El Capitan, British keyboard layout.

Has anyone tried this version on Linux/Ubuntu? Seeing all these issues on Capitan makes me hesitant to upgrade, unless the problem is platform dependent.

Ctrl-r not working too on El Capitan

Hi all, after the update JSF support has problems. IDEA cannot resolve the namespaces in template or decorator files and code completion does not work.

IntelliJ IDEA 2016.3 is freezing on Mac OS Sierra 10.12 when opening layouts in Design or Text mode. It's done it about 10 times since I installed it today.

Please submit an issue to our tracker with attached CPU snapshot:

I can't get the | (pipe) to work in the internal Terminal of 2016.3. Using a Belgian azerty keyboard layout which requires me to push option + shift + l (L) to get the symbol.

Same for me!

Same problem with German layout

Having the same issue where I can't enter some characters in the Terminal. Really frustrating, and to keep working I had to downgrade to the previous version. The problem seems to be caused by pressing OPTION – CMD – SHIFT + … keys to get symbols like {, |, \. The problem occurs as of 2016.3. Downgraded to 2016.2.5 and Terminal works again. I wonder if Windows users are experiencing similar issues. If you can't wait for a fix you should downgrade to 2016.2.5

I have the same issues with not being able to create a pipe symbol in Terminal with a German layout, using Shift+Alt+7.
https://blog.jetbrains.com/idea/2016/08/intellij-idea-2016-2-3-update-is-out/?replytocom=385707
Project euler 26: Reciprocal cycles

A unit fraction contains 1 in the numerator. The decimal representations of the unit fractions with denominators 2 to 10 are given:

1/2 = 0.5
1/3 = 0.(3)
1/4 = 0.25
1/5 = 0.2
1/6 = 0.1(6)
1/7 = 0.(142857)
1/8 = 0.125
1/9 = 0.(1)
1/10 = 0.1

Where 0.1(6) means 0.166666…, and has a 1-digit recurring cycle. It can be seen that 1/7 has a 6-digit recurring cycle. Find the value of d < 1000 for which 1/d contains the longest recurring cycle in its decimal fraction part.

Solution: I solved this problem in a brute-force manner. The divisor is incremented from 2 to 1000. We store the remainder, instead of the quotient, in an array; when the same remainder repeats, we know there is a recurring cycle. The maximum over all divisors is the answer.

#include <iostream>
#include <stdio.h>

using namespace std;

int max(int a, int b)
{
    return (a > b) ? a : b;
}

int remArray[2000];

int divideByOne(long divisor)
{
    int term = 0;
    long mul = 1;
    for (int i = 0; i < 2000; i++)
        remArray[i] = 0;
    while (mul != 0 && term < 2000) {
        term++;
        mul *= 10;
        mul %= divisor;
        if (remArray[mul] != 0)
            break;
        remArray[mul] = term;
    }
    return term;
}

int main()
{
    int max_val = 0;
    for (int i = 2; i <= 1000; i++)
        max_val = max(max_val, divideByOne(i));
    cout << max_val;
    return 0;
}

time ./a.exe
Answer: 983
real 0m0.087s
user 0m0.030s
sys 0m0.046s

Project euler problem 25: 1000-digit Fibonacci number

Solution: This problem is straightforward; find the first term in the Fibonacci sequence to contain 1000 digits. A very simple Python solution follows:

def func():
    f1 = 1
    f2 = 1
    term = 2
    while len(str(f2)) != 1000:
        term += 1
        f1, f2 = f2, f1 + f2
    print term

func()

time python fibo.py
Answer: 4782
real 0m0.276s
user 0m0.125s
sys 0m0.109s

Optimized solution for Project euler: Problem 21

In my previous post I solved problem 21 by caching the intermediate results, but it can be further optimized for calculating the sum of divisors, as follows.
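The optimization rests on the standard divisor-sum identity, stated here for completeness: if n factors into primes, the sum of all divisors of n is a product of one closed-form factor per prime, so only the prime factorization is needed.

```latex
n = \prod_{i=1}^{k} p_i^{a_i}
\qquad\Longrightarrow\qquad
\sigma(n) = \prod_{i=1}^{k} \frac{p_i^{a_i+1} - 1}{p_i - 1}
```

Each factor (p^(a+1) - 1)/(p - 1) is what the inner loop of sumdivisors computes as ((lsum * p) - 1) / (p - 1), and subtracting num at the end yields the sum of proper divisors.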
According to the above method, the further optimized code is as follows:

#include <iostream>
#include <vector>
#include <cmath>
#include <cstring>

using namespace std;

#define MAX 100000

vector<long> prime(0);

void seive_helper(bool primes[], int range)
{
    for (int i = 3; i <= range / 2; ++i, ++i) {
        for (int j = i * 3; j <= range; j += 2 * i) {
            primes[j] = true;
        }
    }
}

void seive(int range)
{
    bool primes[range + 1];
    memset(primes, 0, sizeof(primes));
    for (int i = 4; i <= range; ++i, ++i) {
        primes[i] = true;
    }
    seive_helper(primes, range);
    for (int i = 2; i <= range; ++i) {
        if (!primes[i]) {
            prime.push_back(i);
        }
    }
}

long long sumdivisors(long num)
{
    long n = num;
    long p = 2;
    int i = 0;
    int count = 0;
    long sum = 1;
    long lsum = 0;
    while (n >= p * p && i < (int) prime.size()) {
        count = 0;
        lsum = 1;
        while (n % p == 0) {
            n = n / p;
            count++;
            lsum *= p;
        }
        if (count > 0)
            sum *= ((lsum * p) - 1) / (p - 1);
        i++;
        if (i >= (int) prime.size())
            break;
        p = prime[i];
    }
    if (n > 1) // now consider the only remaining prime, which is greater than the square root of the given number
        sum *= (n + 1);
    return sum - num;
}

long ami[100000] = {0};

int amicable(int num)
{
    if (ami[num - 1] != 0) {
        return ami[num - 1];
    } else {
        ami[num - 1] = sumdivisors(num);
        return ami[num - 1];
    }
}

int main()
{
    long long sum = 0;
    seive((int) sqrt(10000));
    for (int i = 2; i <= 10000; i++) {
        if (amicable(i) != i && i == amicable(amicable(i))) {
            sum += i;
        }
    }
    cout << "Sum of all amicable numbers: " << sum;
    return 0;
}

Time taken for the above solution:

time ./a.exe
Sum of all amicable numbers: 31626
real 0m0.038s
user 0m0.000s
sys 0m0.030s

It is obviously better compared to my previous solution:

Sum of all amicable numbers: 31626
real 0m0.185s
user 0m0.140s
sys 0m0.046s

Project euler problem 22: Names scores

Problem 22: Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then, working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score. What is the total of all the name scores in the file?

Solution: The solution to this problem is straightforward. I have used Python for the ease of string operations.
apyr = []

def pyr():
    with open("names.txt", "r") as out:
        for i in out:
            i = i.replace('"', '')
            apyr = i.split(',')
    apyr = sorted(apyr)
    gsum = 0
    for i in range(1, len(apyr) + 1):
        lsum = 0
        for j in range(0, len(apyr[i - 1])):
            lsum += (ord(apyr[i - 1][j]) - 64)
        gsum += lsum * i
    print gsum

pyr()

time python names.py
Answer: 871198282
real 0m0.160s
user 0m0.030s
sys 0m0.140s
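The worked example from the problem statement (COLIN, the 938th name when sorted, scoring 938 × 53 = 49714) makes a handy sanity check. This small helper is mine, not from the post:

```python
def name_score(name, position):
    # alphabetical value of the name (A=1, B=2, ...) times its 1-based position
    return position * sum(ord(ch) - ord('A') + 1 for ch in name)

print(name_score("COLIN", 938))  # 49714
```

The inner loop of the posted solution computes the same alphabetical value with ord(...) - 64.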
https://vasanthexperiments.wordpress.com/page/2/
Save plots into a multi-page pdf?

I am creating a series of plots that I would like to save into a multi-page pdf file. I found a couple of references about using matplotlib in pylab to do so, but I can't get it to work in SageMath. Is it possible to do this in SageMath? Here's what I tried:

from matplotlib.backends.backend_pdf import PdfPages

pdf_pages = PdfPages('curves.pdf')
p1, p1max, p2, p2max, a = var('p1, p1max, p2, p2max, a')
p1max = 10
p2max = 10
for p1 in [1..p1max]:
    for p2 in [1..p2max]:
        fig = parametric_plot(
            [cos(a) + cos(p1*a)/2 + sin(p2*a)/3,
             sin(a) + sin(p1*a)/2 + cos(p2*a)/3],
            (a, 0, 2 * pi),
            title = ('(p1, p2) = (' + str(p1) + ', ' + str(p2) + ')'),
            frame = True,
            axes_pad = .05)
        pdf_pages.savefig(fig)
pdf_pages.close()

... and it produces this error:

File "/usr/lib/sagemath/local/lib/python2.7/site-packages/matplotlib-1.4.3-py2.7-linux-x86_64.egg/matplotlib/backends/backend_pdf.py", line 2438, in savefig
    raise ValueError("No such figure: " + repr(figure))
ValueError: No such figure: Graphics object consisting of 1 graphics primitive

I'm not really sure that what you are saving is a matplotlib figure, though. You would want to convert the Sage graphics objects to .matplotlib() first, I think - and even then I don't know whether Sage is compiled with this backend available, though you should try.

Thanks for replying. Yes, it was an admittedly naive attempt to see if the PdfPages import would work as-is, and I didn't really expect it would. Now I'm hunting around to see if there's any other way of achieving the same result. If not, it's easy enough to create a pdf from individual plot images.

It may very well work, but again you'd have to convert the Sage figures into matplotlib ones - which is supported.
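For reference, here is what the PdfPages pattern looks like in plain matplotlib, where savefig() is handed real matplotlib Figure objects; the thread's error suggests Sage Graphics objects were passed instead. This is an illustrative sketch, not Sage code:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

with PdfPages("curves.pdf") as pdf:
    for k in range(1, 4):
        fig, ax = plt.subplots()
        ax.plot([0, 1], [0, k])
        ax.set_title("page %d" % k)
        pdf.savefig(fig)   # a matplotlib Figure, not a Sage Graphics object
        plt.close(fig)
```

Each savefig() call appends one page, so the loop above yields a three-page PDF; the comments in the thread point at converting Sage objects via .matplotlib() first.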
https://ask.sagemath.org/question/30786/save-plots-into-a-multi-page-pdf/
Red Hat Bugzilla – Bug 136297
/usr/bin/yum refers to nonexisting dir ; yummain inaccessible
Last modified: 2014-01-21 17:50:24 EST

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; rv:1.7.3) Gecko/20041012 Firefox/0.10.1

Description of problem:
/usr/bin/yum contains:

sys.path.insert(0, '/home/skvidal/cvs/yum-HEAD')

As a consequence, yummain.py from /usr/share/yum-cli is not found and yum does not run; given the importance of yum, this is critical. The concerned line should read:

sys.path.insert(0, '/usr/share/yum-cli')

Version-Release number of selected component (if applicable): yum-2.1.8-1

How reproducible: Always

Steps to Reproduce:
1. Type ``yum'' on the command line; that's all.

Actual Results:
[root@tate thome]# yum
Traceback (most recent call last):
  File "/usr/bin/yum", line 6, in ?
    import yummain
ImportError: No module named yummain

Expected Results: not this. An error because I haven't given any command, but not this sort of crash.

Additional info: Here is some additional info from rpm -qi
Name       : yum
Version    : 2.1.8
Release    : 1
Build Date : Mon 18 Oct 2004 07:47:52 AM CEST
Build Host : tweety.build.redhat.com
Size       : 649456

This is actually a security bug. A user named "skvidal" will be able to do whatever he likes when the superuser runs yum.

ShXt, once again this bad bugzilla search tool fooled me. This bug has already been reported a number of times in the night (cf 136172). Sorry for that. E.

*** This bug has been marked as a duplicate of 136172 ***

Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.
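The failure mode is easy to demonstrate: Python only finds a module if its directory is on sys.path, which is exactly why the hard-coded developer path in /usr/bin/yum broke the yummain import. This is an illustrative sketch using a throwaway module, not the actual yum code:

```python
import os
import sys
import tempfile

# Create a throwaway module in a temp directory (a stand-in for /usr/share/yum-cli).
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "yummain_demo.py"), "w") as f:
    f.write("MESSAGE = 'found it'\n")

try:
    import yummain_demo            # not on sys.path yet: fails
except ImportError:
    print("ImportError, as in the bug report")

sys.path.insert(0, moddir)         # the fix: point at the directory that really exists
import yummain_demo
print(yummain_demo.MESSAGE)        # found it
```

The one-line fix in the report does the same thing: replace the nonexistent /home/skvidal/... entry with /usr/share/yum-cli, the directory that actually contains yummain.py.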
https://bugzilla.redhat.com/show_bug.cgi?id=136297
Debug::Client - debugger client side code for Padre, The Perl IDE.

This document describes Debug::Client version 0.29.

use Debug::Client;
my $debugger = Debug::Client->new(host => $host, port => $port);

Where $host is the host-name to be used by the script under test (SUT) to access the machine where Debug::Client runs. If they are on the same machine, this should be localhost. $port can be any port number on which Debug::Client can listen.

This is the point where the external SUT needs to be launched, by first setting

$ENV{PERLDB_OPTS} = "RemotePort=$host:$port"

and then running

perl -d script

Once the script under test has been launched, we can call the following:

my $out = $debugger->get;

$out = $debugger->step_in;
$out = $debugger->step_over;

my ($prompt, $module, $file, $row, $content) = $debugger->step_in;
my ($module, $file, $row, $content, $return_value) = $debugger->step_out;

my $value = $debugger->get_value('$x');

$debugger->run();        # run till end of breakpoint or watch
$debugger->run( 42 );    # run till line 42 (c in the debugger)
$debugger->run( 'foo' ); # run till beginning of sub

$debugger->execute_code( '$answer = 42' );
$debugger->execute_code( '@name = qw(foo bar)' );
my $value = $debugger->get_value('@name'); # $value is the dumped data?

$debugger->execute_code( '%phone_book = (foo => 123, bar => 456)' );
my $value = $debugger->get_value('%phone_book'); # $value is the dumped data?

$debugger->set_breakpoint( "file", 23 ); # set breakpoint on file, line

$debugger->get_stack_trace

my $script = 'script_to_debug.pl';
my @args = ('param', 'param');
my $perl = $^X; # the perl might be a different perl
my $host = '127.0.0.1';
my $port = 24642;
my $pid = fork();
die if not defined $pid;

if (not $pid) {
    local $ENV{PERLDB_OPTS} = "RemotePort=$host:$port";
    exec("$perl -d $script @args");
}

require Debug::Client;
my $debugger = Debug::Client->new(
    host => $host,
    port => $port,
);
$debugger->listener;
my $out = $debugger->get;
$out = $debugger->step_in;
# ...
This is a DEVELOPMENT Release only, you have been warned! The primary use of this module is to provide debugger functionality for Padre 0.98 and beyond. This module has been tested against Perl 5.18.0.

The constructor can take two parameters: host and port.

    my $debugger = Debug::Client->new;
    my $debugger = Debug::Client->new(host => 'remote.host.com', port => 24642);

Returns the content of the buffer since the last command:

    $debugger->get_buffer;

    $debugger->quit();

. (dot) Return the internal debugger pointer to the line last executed, and print out that line:

    $debugger->show_line();

Return the internal debugger pointer to the line last executed, and generate file-name and row for where we are now (trying to use perl5db line-info in a naff way):

    $debugger->get_lineinfo();

Then use the following as and when:

    $debugger->get_filename;
    $debugger->get_row;

to get filename and row for the IDE. Due to changes in perl5db v1.35, see perl5156delta.

v [line] View a few lines of code around the current line:

    $debugger->show_view();

s [expr] Single step. Executes until the beginning of another statement, descending into subroutine calls. If an expression is supplied that includes function calls, it too will be single-stepped:

    $debugger->step_in();

Expressions are not supported.

    $debugger->step_over();

    my ($prompt, $module, $file, $row, $content, $return_value) = $debugger->step_out();

Where $prompt is just a number, probably useless. $return_value will be undef if the function was called in VOID context; it will hold a scalar value if called in SCALAR context; it will hold a reference to an array if called in LIST context.

TODO: check what happens when the return value is a reference to a complex data structure or when some of the elements of the returned array are themselves references.

Sends the stack trace command T to the remote debugger and returns it as a string if called in scalar context. Returns the prompt number and the stack trace string when called in array context.
Sends the stack trace command t - Toggle trace mode:

    $debugger->toggle_trace();

Sends the stack trace command S [[!]pattern] - List subroutine names [not] matching the pattern.

    $debugger->run;

Will run till the next breakpoint or watch or the end of the script. (Like pressing c in the debugger.)

    $debugger->run($param);

    $debugger->set_breakpoint($file, $line, $condition);

$condition is not currently used.

    $debugger->remove_breakpoint( $self, $file, $line );

The data as (L) prints in the command line debugger:

    $debugger->show_breakpoints();

    my $value = $debugger->get_value($x);

If $x is a scalar value, $value will contain that value. If it is a reference to an ARRAY or HASH then $value should be the value of that reference?

p expr - From perldebug, but defaulted to y 0:

    $debugger->get_p_exp();

y [level [vars]] Display all (or some) lexical variables (mnemonic: my variables) in the current scope or level scopes higher. You can limit the variables that you see with vars, which works exactly as it does for the V and X commands. Requires that the PadWalker module be installed. Output is pretty-printed in the same style as for V and the format is controlled by the same options:

    $debugger->get_y_zero();

which is now y=1 since perl 5.17.6:

    $debugger->get_v_vars(regex);

X [vars] Same as V currentpackage [vars]:

    $debugger->get_x_vars(regex);

Enter h or `h h' for help. For more help, type h cmd_letter, optional var:

    $debugger->get_h_var();

o booloption ... Set each listed Boolean option to the value 1.
o anyoption? ... Print out the value of one or more options.
o option=value ... Set the value of one or more options. If the value has internal white-space, it should be quoted.

    $debugger->set_option();

o Display all options:

    $debugger->get_options();

Actually I think this is an internal method... In SCALAR context will return all the buffer collected since the last command.
In LIST context will return ($prompt, $module, $file, $row, $content). Where $prompt is what the standard debugger uses for a prompt - probably not too interesting. $file and $row describe the location of the next instructions. $content is the actual line; this is probably not too interesting either, as it is in the editor. $module is just the name of the module in which the current execution is.

    $debugger->get_filename();
    $debugger->get_row();
    $debugger->module();

If you get any issues installing, try installing Term::ReadLine::Gnu first.

Warning: if you use a List request you may get spurious results. When used against perl5db.pl v1.35, list mode gives an undef response; the leading single quote is now correct. Tests are skipped for list mode against v1.35 now.

Debug::Client 0.12 tests are failing, due to changes in the perl debugger, when using perl5db.pl v1.34. Debug::Client 0.13_01 adds skips to the failing tests.

c [line|sub] Continue, optionally inserting a one-time-only breakpoint at the specified line or subroutine. c now ignores the [line|sub] options and just performs c on its own.

Warning: sub listen has been deprecated since 0.13_04 and all future versions starting with v0.14 (Perl::Critic error: Subroutine name is a homonym for built-in function). Use $debugger->listener instead.

It will work against perl 5.17.6-7 with rindolf patch 7a0fe8d applied for watches.

Kevin Dawson <bowtie@cpan.org>
Gabor Szabo <gabor@szabgab.com>
Breno G. de Oliveira <garu at cpan.org>
Ahmad M. Zawawi <ahmad.zawawi@gmail.com>
Mark Gardner <mjgardner@cpan.org>
Wolfram Humann <whumann@cpan.org>
Adam Kennedy <adamk@cpan.org>
Alexandr Ciornii <alexchorny@gmail.com>

Some parts Copyright © 2011-2013 Kevin Dawson and CONTRIBUTORS as listed above. This program is free software; you can redistribute it and/or modify it under the same terms as Perl 5 itself. There is no warranty whatsoever. If you lose data or your hair because of this program, that's your problem.
Originally started out from the remote-port.pl script from Pro Perl Debugging, written by Richard Foley. See also GRID::Machine::remotedebugtut.
http://search.cpan.org/~bowtie/Debug-Client-0.29/lib/Debug/Client.pm
New in version 2.1.8October 14th, 2014 - New Features: - Adding the new "hola" API - hamsterdb analytical functions for COUNT, SUM, AVERAGE etc. See ham/hamsterdb_ola.h for the declarations - Added new API ham_cursor_get_duplicate_position - A new Python API was added - Bugfixes: - issue #33: upgraded to libuv 0.11.22 - Fixing a performance regression in 2.1.7 - large fixed-length keys created too many page splits, even if they were stored as extended keys - Other Changes: - The database format no longer tries to be endian agnostic; the database is now stored in host endian format. The endian agnostic code was broken anyway, and I had no hardware to test it. - ham_db_get_error is now deprecated - header files no longer include winsock.h to avoid conflicts with winsock2.h on Windows platforms - Both btree layouts have been completely rewritten; PAX KeyLists can now be used in combination with duplicate RecordLists, and variable length KeyLists can now be used in combination with PAX RecordLists - Avoiding Btree splits if keys are appended (HAM_HINT_APPEND) - The internal communication with the remote server now uses a different protocol which is faster than google's protobuffer - PAX layout now uses linear search for small ranges; this improves search performance by 5-10% - Removed the ham_get_license API (and serial.h) New in version 2.1.5 (February 14th, 2014) - This release fixes several bugs and improves performance. Also, hamsterdb now scales much better if the file size grows beyond several gigabytes. New in version 2.1.4 (January 10th, 2014) - This release adds custom Btree layouts for variable length keys and duplicate keys. Also, small records are now stored directly in the Btree leaf node, instead of an external blob. New in version 2.0.5 (December 3rd, 2012) - This version fixes a few minor bugs, has a few performance improvements, and fixes a segmentation fault in the .NET API. 
- The internal C++ implementation has been moved into namespace “ham” to avoid conflicts with other symbols. - Please check the README for upcoming API changes in the next release. New in version 2.0.3 (June 26th, 2012) - This version fixes several bugs and adds support for Microsoft's Visual Studio 2010. - The legacy file format of hamsterdb 1.0.9 and older is no longer supported. - Sources and precompiled libraries for Win32 (x86 and x64) are available for download. New in version 2.0.2 (April 28th, 2012) - This version makes hamsterdb thread-safe. - A bug in the freelist was fixed. - Boost is now required. - Sources and pre-compiled win32/win64 libraries are available for download. New in version 2.0.1 (February 20th, 2012) - This version adds a few minor features like setting a custom path for log files and re-enabling approximate matching for use with Transactions. - A few bugs were fixed as well. - Sources and precompiled Win32/Win64 libraries are available for download. New in version 2.0.0 (January 23rd, 2012) - It features a complete re-implementation of the Transaction support, now allowing an unlimited number of Transactions in parallel. - It integrates the Java and .NET APIs. - Sources, documentation, and prebuilt libraries for Win32 (including .NET and Java) are available on the (redesigned) webpage. New in version 2.0.0 RC3 (November 30th, 2011) - This version further stabilizes the 2.x branch and fixes all known issues from the previous rc2 release. - Performance was improved in many areas. - Sources and precompiled Win32 libraries are available for download on the Web page.
http://linux.softpedia.com/progChangelog/hamsterdb-Changelog-17717.html
#include <vtkJPEGReader.h>

read JPEG files

vtkJPEGReader is a source object that reads JPEG files. The reader can also read an image from a memory buffer, see vtkImageReader2::MemoryBuffer. It should be able to read most any JPEG file.

Definition at line 34 of file vtkJPEGReader.h.

Definition at line 38 of file vtkJPEGReader.h.

Definition at line 59 of file vtkJPEGReader.h.

Definition at line 60 of file vtkJPEGReader.h.

Methods invoked by print to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes. Reimplemented from vtkImageReader2.

Is the given file a JPEG file?

Get the file extensions for this format. Returns a string with a space-separated list of extensions in the format .extension. Reimplemented from vtkImageReader2. Definition at line 51 of file vtkJPEGReader.h.

Return a descriptive name for the file format that might be useful in a GUI. Reimplemented from vtkImageReader2. Definition at line 56 of file vtkJPEGReader.h.

This is a convenience method that is implemented in many subclasses instead of RequestData. It is called by RequestData. Reimplemented from vtkImageReader2.
https://vtk.org/doc/nightly/html/classvtkJPEGReader.html
Define two pointers, slow and fast. Both start at the head node, and fast moves twice as fast as slow. If fast reaches the end, there is no cycle; otherwise it will eventually catch up to slow somewhere in the cycle.

Let the distance from the first node to the node where the cycle begins be A, and say the slow pointer has traveled A+B when they meet. The fast pointer must have traveled 2A+2B to catch up. The cycle size is N. One full cycle is also how much farther the fast pointer has traveled than the slow pointer at the meeting point:

A + B + N = 2A + 2B
N = A + B

So the slow pointer has traveled exactly one full cycle's length (N) when it meets the fast pointer. It is currently B steps into the cycle, so another N - B = A steps bring it to the point where the cycle begins. The head is also exactly A steps from that point, so we can start another slow pointer at the head node and move both pointers until they meet at the beginning of the cycle.

public class Solution {
    public ListNode detectCycle(ListNode head) {
        ListNode slow = head;
        ListNode fast = head;
        while (fast != null && fast.next != null) {
            fast = fast.next.next;
            slow = slow.next;
            if (fast == slow) {
                ListNode slow2 = head;
                while (slow2 != slow) {
                    slow = slow.next;
                    slow2 = slow2.next;
                }
                return slow;
            }
        }
        return null;
    }
}

Since slow is already in the loop, slow can only move inside the loop, while slow2 just moves from the beginning. Thus they will meet exactly at the beginning of the cycle.

I can't understand the formula very well:

A + B + N = 2A + 2B
N = A + B

Think about the LinkedList 1->2->3->4->5->6->7->8->9->10-> where the cycle begins at 3. The slow pointer and fast pointer will meet at 9. N = A + B: here A = 2 (1->3), N = 8 (3->10 plus 10->3), B = N - A = 6, and the meeting point is A + B = 2 + 6 = 8 (the 8th node from the start, which is 9).

If you meet at node 7, how can slow meet slow2 at node 3? From node 1 to 3 is two steps; from node 7 to 3 is four steps.

Does this diagram help you understand? When fast and slow meet at point p, the lengths they have run are 'a+2b+c' and 'a+b', since fast is 2 times faster than slow.
So a+2b+c == 2(a+b), and we get 'a == c'. So when another slow2 pointer runs from head to 'q', the previous slow pointer will at the same time run from 'p' to 'q', so they meet at point 'q' together.

ListNode fast = head, slow = head;
while (fast != null && fast.next != null) {
    fast = fast.next.next;
    slow = slow.next;
    if (fast == slow) {
        ListNode slow2 = head;
        while (slow != slow2) {
            slow2 = slow2.next;
            slow = slow.next;
        }
        return slow;
    }
}
return null;

Since there is a cycle, when slow moves it will loop from p to q while slow2 moves from head to q. And from the proof we know that a == c; that means slow and slow2 will meet exactly at the point where the cycle starts.

This is actually called the Tortoise and Hare algorithm (in reference to Aesop's fable) and is attributed to Floyd, the same Floyd as in the Floyd–Warshall algorithm, in case you were interested in more research.

@lwen8989gmail.com The picture really helps! Thanks a lot!
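The same two-phase algorithm transcribes directly to Python; here it is as a sketch for experimentation (not the thread's original code), using the thread's 10-node example as a sanity check:

```python
class ListNode:
    def __init__(self, val):
        self.val = val
        self.next = None

def detect_cycle(head):
    slow = fast = head
    while fast and fast.next:
        fast = fast.next.next
        slow = slow.next
        if fast is slow:
            # Phase 2: a second slow pointer from head meets `slow`
            # exactly at the cycle's start, since a == c.
            slow2 = head
            while slow2 is not slow:
                slow = slow.next
                slow2 = slow2.next
            return slow
    return None  # fast fell off the end: no cycle

# The 1->2->...->10 list from the discussion, with 10 looping back to 3:
nodes = [ListNode(i) for i in range(1, 11)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[-1].next = nodes[2]
```

Here detect_cycle(nodes[0]) returns the node holding 3, matching the worked example above (the pointers first meet at 9, then phase 2 walks both back to 3).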
https://discuss.leetcode.com/topic/19367/java-o-1-space-solution-with-detailed-explanation/13
20 May 2010 13:06 [Source: ICIS news] PRAGUE (ICIS news)--Slovnaft has cut its first-quarter net loss to €4m ($4.94m) from the €11m loss it posted in the same period last year, thanks to improved petrochemical sales revenues, the Slovakian refining and petrochemical company said on Thursday.

The group's petrochemical revenues climbed 54% to €125m, while overall company revenues increased 37% to €719m, Slovnaft added.

"The last month of the quarter showed a positive trend change signalling the influence of the global economic recovery," said Slovnaft CEO Oszkar Vilagi, referring to both retail fuel and petrochemical prospects. "Although exports recorded lower demand in the early months on the Czech, Hungarian and German markets, in the coming months we expect growth in sales here," Vilagi added.

Slovnaft said that its first-quarter polymer sales volumes decreased year on year by 0.5% to 98,700 tonnes, while its copolymer sales for the first quarter of 2010 increased by 55.5% compared with the same period last year. Export sales volumes in the first quarter of 2010 fell by 1% year on year, while domestic sales rose by 2% compared with the same period a year before, the group added.

The company is a subsidiary of
http://www.icis.com/Articles/2010/05/20/9361327/slovnaft-cuts-its-first-quarter-net-loss-to-4m.html
Red Hat Bugzilla – Bug 152680 libxml2: an overflow when parsing remote resources. Last modified: 2014-01-21 17:51:39 EST. ------- Additional Comments From michal@harddata.com 2004-02-26 16:49:22 ---- Created an attachment (id=560) A proposed patch for bug #1324 adapted to sources from RH 7.3 distribution Files in question do not differ very much across lib versions and that patch will likely apply everywhere with slight offsets. ------- Additional Comments From dom@earth.li 2004-02-27 07:44:22 ---- Patch applies cleanly to 7.3 and 8.0 - SRPMS at: redhat 7.3: redhat 8.0: (version numbers should be correct this time!) The patch doesn't apply cleanly to the 7.2 package (libxml2-2.4.10-0.7x.2.src.rpm) so no 7.2 fix for now. ------- Additional Comments From arvand@sabetian.net 2004-03-03 05:09:36 ---- This is from RedHat: "[Updated 3 March 2004] Revised libxml2 packages are now available as the original packages did not contain a complete patch." References: So I guess we are back to square one? I guess we werent too far from it anyway. ------- Additional Comments From michal@harddata.com 2004-03-03 13:09:19 ---- Created an attachment (id=566) "forgotten buffer overflow" patch to libxml2 > So I guess we are back to square one? Not really. One more small patch is required. Applies on the top of the previous one. It seems to be all for the moment; but if somebody else eyes that one that would be even better. I am running right now with re-patched libxml2. ------- Additional Comments From michal@harddata.com 2004-03-03 13:37:10 ---- Created an attachment (id=567) A fix of known issues for libxml2-2.4.10-0.7x.2, i.e. 7.2 package This "carries over" to RH 7.2 all current fixes applied to sources used in RH 7.3 and above. I do not have RH 7.2 system running so somebody with such would have to test it. OTOH you can use this 7.3 rpms of this library on a 7.2 system and this should not cause any problems. 
------- Additional Comments From bugs.michael@gmx.net 2004-03-23 04:36:49 ---- What's the status here? Package from comment 2 is good except it's missing the patch from comment 4 and "Buildrequires: libxslt-devel zlib-devel" should be added. ------- Additional Comments From skvidal@phy.duke.edu 2004-04-30 20:46:04 ---- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 libxml2 packages - updated with all comments/patches from this bug e9603148f15b064d839b2feabc2e7e8b libxml2-2.4.19-5.legacy.i386.rpm c28a5f127bbd811c5c57e25ea3a65809 libxml2-2.4.19-5.legacy.src.rpm 31ed8e2030680461382e26b6d5348e30 libxml2-devel-2.4.19-5.legacy.i386.rpm ba441fdad05d94f094b78ced37c26226 libxml2-python-2.4.19-5.legacy.i386.rpm please QA -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iD8DBQFAk0ed1Aj3x2mIbMcRAjgJAJ9za1zAL/X+kYzkZgSxIEppcAPQ0wCfcCSS EtmMEaGUoV7eZjn/vaUYRq8= =HVdw -----END PGP SIGNATURE----- ------- Additional Comments From bugs.michael@gmx.net 2004-05-01 05:20:03 ---- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 SHA1 9d5bfcd9ac771ebcb92bbd9d6cba44c9ef2fa2f7 libxml2-2.4.19-5.legacy.src.rpm MD5 c28a5f127bbd811c5c57e25ea3a65809 libxml2-2.4.19-5.legacy.src.rpm * src.rpm is not signed * sources have not changed * bounds checking patch makes sense and doesn't need any testing * builds and upgrades fine on rh73 ++PUBLISH -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.3 (GNU/Linux) iD8DBQFAk7/+0iMVcrivHFQRAiXDAJ9rD/o/qPUVIt9FLhia3hVGchwowQCfS6LK 0ZZMRbFqQy+MIwWLqBdmT38= =EOlY -----END PGP SIGNATURE----- ------- Additional Comments From jkeating@j2solutions.net 2004-05-19 17:02:09 ---- Packages for 7.2 and 8.0. Need quick QA on these before pushing to updates-testing. 
e8f50a68edd61cc2e6085390a1afcd6578f9edb6 7.2/libxml2-2.4.10-0.7x.3.legacy.i386.rpm a98bc1ffc46a1ebd9bf67211af79212f5af9a816 7.2/libxml2-2.4.10-0.7x.3.legacy.src.rpm d7c85626da4d2f9496eabf491e45f62256425ec6 7.2/libxml2-devel-2.4.10-0.7x.3.legacy.i386.rpm 47e4f3733caca1a34e55e1e4ae79cc3a7f187485 7.2/sha1sums b5ff99b1ce68cb22c767359296b51d513650a2e9 8.0/libxml2-2.4.23-2.legacy.i386.rpm 110f8321c8cc492a22cde952f56511f6db9bda34 8.0/libxml2-2.4.23-2.legacy.src.rpm de270d96f7bad00debb59211447f89084bc5bf83 8.0/libxml2-devel-2.4.23-2.legacy.i386.rpm 606fa877723e1a0dc6cb759389bd65554b17d9d2 8.0/libxml2-python-2.4.23-2.legacy.i386.rpm cdda589eb44bce8c2e885418fc6641116d540472 8.0/sha1sums ------- Additional Comments From jkeating@j2solutions.net 2004-05-31 12:06:57 ---- Pushed to updates-testing. 7ea6c8e40a04c2eafb82d53e8e6931b27348f4ad 7.3/updates-testing/SRPMS/libxml2-2.4.19-5.legacy.src.rpm c325b2b9d03335b41db6b0b462a35d1ed847e56f 7.3/updates-testing/i386/libxml2-2.4.19-5.legacy.i386.rpm c53f70cad435630b3e5b5f5d363c7d425f980a35 7.3/updates-testing/i386/libxml2-devel-2.4.19-5.legacy.i386.rpm 8819fa789731693645839f32f55aac2f2dc27906 7.3/updates-testing/i386/libxml2-python-2.4.19-5.legacy.i386.rpm ------- Additional Comments From jpdalbec@ysu.edu 2004-06-25 07:31:30 ---- How does one verify libxml2 in production? I see that nautilus has a dependency on it. Is it enough to open the "start here" link and try a few of the control panels? ------- Additional Comments From michal@harddata.com 2004-06-25 07:39:46 ---- I can only say that I have a patched libxml2 in use on seven machines with various installations starting from March 3rd and I would still have to see problems. I am not sure how likely is a remote attack against, say, httpd using this flaw but it definitely opens possibilities. 
------- Additional Comments From jpdalbec@ysu.edu 2004-06-30 03:26:22 ---- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ++VERIFY RH 7.3 cc4f1a9f0163fe10fd36524758aacf34f2bb75ee libpng-1.0.14-0.7x.4.i386.rpm bca918186e519dbb73362f08453410a981770645 libpng-devel-1.0.14-0.7x.4.i386.rpm 5f39b22b6dbcd66c777289fde7777631c1b8146e libxml2-2.4.2-1.i386.rpm 53cbe1a4c519cfb222b4a7527d948d52ac60e1d4 libxml2-devel-2.4.2-1.i386.rpm I installed these packages on a production web server and restarted apache. I haven't seen any problems. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.7 (GNU/Linux) iD8DBQFA4rrCJL4A+ldA7asRApZiAKCmXkwMXY2SztTjKbdoLcgvOdLA4wCfdOjr DBm/gi1Dqs3HJkaqWu6rOQA= =DL/l -----END PGP SIGNATURE----- ------- Additional Comments From kev@sowood.co.uk 2004-07-22 06:36:24 ---- The rebuild of this package appears to have removed support for python2.2. Looking at the files list for the previous package version (libxml2-python-2.4.19-4) and the new version (libxml2-python-2.4.19-5), the files for python2.2 have been removed. Given how the specfile operates, the problem presumably arose as the rpm was rebuilt on a system with only python1.5 installed. The specfile automatically detects and builds for the versions of python on the system. [kev@coll kev]$ rpm -ql libxml2-python-2.4.19-5.legacy /usr/lib/python1.5/site-packages/libxml2.py /usr/lib/python1.5/site-packages/libxml2mod.so But nothing for python2.2 This is currently breaking yum with the following error: [kev@coll classes]$ yum Traceback (most recent call last): File "/usr/bin/yum", line 22, in ? import yummain File "yummain.py", line 30, in ? File "yumcomps.py", line 4, in ? File "comps.py", line 5, in ? File "/usr/lib/python2.2/site-packages/libxml2.py", line 1, in ? ImportError: No module named libxml2mod ------- Additional Comments From jpdalbec@ysu.edu 2004-07-28 04:46:12 ---- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 New RHL 7.3 libxml2 packages are available. 
sha1sums: d5a4b8060f08ffbbd259bc5e31ea5bbf9047ad7e 609f9276b2a096aced8fc15ddab896de0cd68056 c3b195b77ca1f57d2b9e2af75d954a099cd3a23b 668c6759e48173bce19be87b57797174039e78d3 changelog: * Wed Jul 28 2004 John Dalbec <jpdalbec@ysu.edu> 2.4.19-6.legacy - - added buildrequires: python2 python2-devel per comment 14 Installed on workstation. Yum still works. libxml2-python filelist shows modules for 1.5 and 2.2. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.1 (GNU/Linux) iD8DBQFBB7qpJL4A+ldA7asRAobKAJ910mSxhh83BIPA/sOZ0R81eU2QgQCgg1Rv B5KQ+YI/hEbmD2XE8VNYmhE= =6zsX -----END PGP SIGNATURE----- ------- Additional Comments From dom@earth.li 2004-07-28 04:51:30 ---- Such a package is already sitting in updates-testing: ------- Additional Comments From jpdalbec@ysu.edu 2004-07-30 10:57:39 ---- I guess I missed that. Should this be changed to RESOLVED PENDING then? ------- Additional Comments From bugs.michael@gmx.net 2004-07-30 11:55:53 ---- I see libxml2-2.4.19-5 in "updates", the same version that was pushed to updates-testing on 2004-05-31. This bug was closed on 2004-07-19: How the 2004-07-22 build of libxml2-2.4.19-6 was pushed to updates-testing cannot be seen from the activity log. ------- Additional Comments From jpdalbec@ysu.edu 2004-08-16 09:47:22 ---- Can someone add the 2.4.19-6.legacy packages to the yum header.info file for updates-testing? Otherwise yum users won't see them. ------- Additional Comments From marcdeslauriers@videotron.ca 2004-09-12 08:04:55 ---- libxml2-python-2.4.19-5.legacy should be removed ASAP from the updates directory as I think it breaks yum. ------- Additional Comments From bugs.michael@gmx.net 2004-09-12 08:29:15 ---- The '5' release is missing "Buildrequires: python2-devel". The binary builds did not miss anything, e.g. look at: The verification step was kind of skipped. Usually, differences between last good release and a new update are caught prior to the VERIFIED step in bugzilla. 
For that it's important to get official builds as soon as possible, because QA on the binary builds is even more important for legacy updates than fixing src.rpm bugs and waiting weeks for the binary builds. ------- Additional Comments From marcdeslauriers@videotron.ca 2004-09-30 16:25:13 ---- the 2.4.19-6.legacy packages in updates-testing should be released as they're identical to the 2.4.19-5 packages in the updates directory except for the missing BuildRequires... ------- Bug moved to this database by dkl@redhat.com 2005-03-30 18:23 ------- This bug previously known as bug 1324 at Originally filed under the Fedora Legacy product and Package request component. Attachments: A proposed patch for bug #1324 adapted to sources from RH 7.3 distribution "forgotten buffer overflow" patch to libxml2 A fix of known issues for libxml2-2.4.10-0.7x.2, i.e. 7.2 package.
https://bugzilla.redhat.com/show_bug.cgi?id=152680
Here is the code -

using UnityEngine;
using System.Collections;

public class MoveEnemy : MonoBehaviour {
    private GameObject Road1;
    public Sprite NE;
    [HideInInspector]
    public GameObject[] waypoints;
    private int currentWaypoint = 0;
    private float lastWaypointSwitchTime;
    public float speed = 1.0f;
    public string waypoint;

    void Start () {
        lastWaypointSwitchTime = Time.time;
        Road1 = GameObject.Find("Road_1");
    }

    // ~~~~~ some code here which isn't relevant. Update calls the method
    // RotateIntoMoveDirection. ~~~~~

    private void RotateIntoMoveDirection() {
        waypoint = currentWaypoint.ToString();
        Debug.Log(waypoint);
        Road1.Find("waypoint");
    }
}

This code throws the error:

''Assets/Main Project/Scripts/MoveEnemy.cs(67,15): error CS0176: Static member `UnityEngine.GameObject.Find(string)' cannot be accessed with an instance reference, qualify it with a type name instead''

My search found out that I have to use GameObject instead of gameObject, but I AM using it! I am trying to find which waypoint has currently been passed. I get its number, convert it into a string, and try to find the child with the name matching the index of the waypoint. If the waypoint is 1, I find the child with index 1 in the GameObject. Then I will get its name and change another game object's sprite to match that name. If it's taxi_NE I will change it to the sprite taxi_NE, and so on.
The GameObject is an abstract concept where a gameObject is a concrete instance. The latter inherits the general features of a GameObject, such as having a Transform component. However, some methods such as Find cannot be accessed through a gameObject that you made from the GameObject "mold." This is why you are able to say Road1.SetActive(), but not Road1.Find(). Some features are inherited by the gameObject, but others are not. Instead, you need to use the class GameObject. Change that line to:

GameObject.Find( "waypoint" );

and it should work as long as you don't have multiple game objects in your scene labelled "waypoint." It will only work in this case because GameObject.Find will search the entire scene for objects with that name. However, if you need to specifically access objects that are children of Road1, you could use:

Road1.transform.Find( "waypoint" );

since children are actually controlled by the transform component of a gameObject. That may have been a long explanation, but I hope it helps!

Oh well, thanks anyway, but this didn't help like I hoped. I fixed my problem this way:

public Sprite NW;
public Sprite NE;
public Sprite SW;
public Sprite SE;
public GameObject SpriteObject;
private SpriteRenderer spriterenderer;

void Start () {
    spriterenderer = SpriteObject.GetComponent<SpriteRenderer>();
}

private void RotateIntoMoveDirection() {
    switch (currentWaypoint) {
        case 1:
            spriterenderer.sprite = NE;
            break;
        case 2:
            spriterenderer.sprite = NW;
            break;
        // case 3 and so on
    }
}

Originally I wanted to find the INDEX of the child, but it seems like this is just too much of a hassle I have created for myself, as always. Big thanks for teaching me this much! Sorry for asking for more, but there is one more question I have here, which is a weird bug/interaction I have in my lerp. Care to try answering that?

@dan5071 your answer is perfect, but there is a little problem: transform.Find only finds immediate children.
What should I use in order to access a child that is deeper inside the parent, and not just first tier?
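The mold-versus-object distinction in the accepted answer is not Unity-specific. Here is a toy Python model of the same idea (nothing below is Unity's API; the class and method names are made up for illustration): a scene-wide lookup that belongs to the class, next to a per-instance lookup over one object's own children:

```python
class SceneObject:
    _scene = []  # the "mold" keeps track of every object stamped out of it

    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        SceneObject._scene.append(self)
        if parent is not None:
            parent.children.append(self)

    @classmethod
    def find(cls, name):
        # Scene-wide lookup: a class-level ("static") service. C# insists
        # it be called on the type, which is what error CS0176 enforces.
        return next((o for o in cls._scene if o.name == name), None)

    def find_child(self, name):
        # Per-instance lookup over immediate children, in the spirit of
        # Unity's transform.Find on one particular object.
        return next((c for c in self.children if c.name == name), None)
```

SceneObject.find("waypoint") searches everything ever created, while road.find_child("waypoint") searches only that one object's children - the same split as GameObject.Find versus Road1.transform.Find.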
https://answers.unity.com/questions/1141454/gameobjectfind-throws-be-a-error-when-trying-to-fi.html
I’m writing a Gutenberg block for TypeIt that’ll allow content creators to easily drop typewriter effects into WordPress. The Gutenberg infrastructure is heavily rooted in the React ecosystem, so building a block feels very much like building a React application. One piece of this ecosystem that’s new to me, however, is Redux, and soon after I dove into it, I ran into a problem that had my head tilting for quite some time – enough time to warrant writing it down in case I ever need to explain it to myself again. The Problem The short version is that I seemingly couldn’t update local state inside a Redux store listener housed within that component. And in my particular setup, an infinite loop was resulting. I was dumbfounded. The Context In my code, I have a global Redux store that’s responsible for holding the base options for each TypeIt block on a page (why they’re managed separately like this is another conversation). Whenever a block is saved, I want to pull down that block’s options from the shared store and save them with the block itself, rather than storing them somewhere else altogether. Here is my professional artistic attempt to illustrate this arrangement: I attempted to solve this by updating local block state whenever my global store changed. To pull it off, in my block’s component, I used Redux’s subscribe method to listen for any global store changes. When they occurred, I checked if the options for my specific block have changed, and if they did, I updated my block’s attributes (the prop used in a Gutenberg block to save and manage block data). That looked something like this (a bit stripped down for brevity): const { useEffect } = wp.element; const { subscribe } = wp.data; registerBlockType('wp-typeit/block', { // ... edit: ({ attributes, setAttributes }) => { // ... 
    useEffect(() => {
      subscribe(() => {
        let baseSettings = wp.data.select('wp-typeit/store').getSettings()[instanceId]

        if (JSON.stringify(baseSettings) !== JSON.stringify(attributes.settings)) {
          setAttributes({ settings: baseSettings });
        }
      });
    }, []); // <-- Only set up listener on `mount`.
  }
}

This looked pretty safe. But when a global store change occurred, an infinite loop was set off within the component. I soon realized that the setAttributes method provided by Gutenberg triggered another store change (I don’t yet know why). Unexpected, but it still shouldn’t be a problem. After all, the next time the listener fires, my global settings should exactly match my local attributes, preventing the setAttributes method from being called again.

But that was apparently incorrect. As it turned out, within that subscribe listener, my local state wasn’t getting updated at all. And so every time the listener fired, that equality check would fail, over and over again. Infinite loop.

Remember, This Is React

It took a bit, but the solution to this problem arose after remembering how React handles updates to its state. Every time a component’s state (including props) is changed, that component is re-rendered, and it’s only after that rerender that the updated state (including props) is available.

But my subscribe listener wasn’t respecting that. It was being activated once after the component mounted, and so it was only aware of the version of the props it had at that specific time. I could call setAttributes all I wanted, but that specific listener instance would behave as if nothing happened at all.

useEffect(() => {
  subscribe(() => {
    // Listener is created ONCE, and never aware of future state updates.
  });
}, []);

The Solution: Clean Up Store Listeners

In order to perform future store comparisons after my local state was updated, I needed to throw away my subscribe listener every time a local state change occurred.
With my specific circumstances, that meant a few tweaks:

- Extract the unsubscribe method returned when a subscribe listener is created.
- Unsubscribe immediately before the setAttributes method fires. Since setAttributes triggers a global store change, this unplugs the listener to prevent it from firing before the local state is technically updated.
- Instead of setting up a single listener on mount, do so every time the block is updated. To avoid listeners becoming stacked upon listeners, I’m using the cleanup mechanism built into the useEffect hook by returning from the hook with an unsubscribe() method call. Even though I’m already unsubscribing every time I call setAttributes, this will cover my butt any time a different state change occurs, totally unrelated to these settings. The objective is to never have more than one store listener active in the component at once, and this helps guarantee that.

In all, those changes look like this:

const { useEffect } = wp.element;
const { subscribe } = wp.data;

registerBlockType('wp-typeit/block', {
  // ...
  edit: ({ attributes, setAttributes }) => {
    useEffect(() => {
      // ...
-     subscribe(() => {
+     const unsubscribe = subscribe(() => {
        let baseSettings = wp.data.select('wp-typeit/store').getSettings()[instanceId]
        if (JSON.stringify(baseSettings) !== JSON.stringify(attributes.settings)) {
+         unsubscribe();
          setAttributes({ settings: baseSettings });
        }
      });
+     return () => {
+       unsubscribe(); // <-- Destroy listener after every state change.
+     }
-   }, []);
+   }); // <-- Activate new listener after every state change.
  }
}

Takeaway: Understand the React Lifecycle

While this particular problem is highly specific to WordPress/Gutenberg, it all illustrates how important it is to have a solid understanding of the React lifecycle and the gotchas that it makes possible by nature.
In fact, it’s probably a good practice to start troubleshooting bugs like this by rubber ducking the events leading up to and following the undesired behavior that’s occurring. If it’s anything like the challenge I’ve shared here, you’ll walk away with a better understanding of how React fundamentally works, as well as confirmation that you’re not actually going insane.
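The stale-closure behavior described above isn’t specific to Gutenberg. Here is a minimal sketch (my own, with a tiny hypothetical store in place of Redux and React) of why a listener registered once never sees later prop updates, and why tearing it down and re-creating it fixes the comparison:

```javascript
// Minimal sketch (hypothetical store -- no Redux or React) of why a
// listener registered once never sees later prop updates.
function createStore() {
  const listeners = new Set();
  return {
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // Redux-style unsubscribe
    },
    emit() { listeners.forEach((fn) => fn()); },
  };
}

const store = createStore();
const seen = [];

let props = { settings: 'old' };

// Like subscribe() inside useEffect(..., []): the listener closes over
// the props object that existed when it was created.
const capturedProps = props;
const unsubscribe = store.subscribe(() => seen.push(capturedProps.settings));

props = { settings: 'new' }; // simulate a re-render with updated props
store.emit();                // stale closure: still reads 'old'

unsubscribe();               // the fix: tear the old listener down...
store.subscribe(() => seen.push(props.settings)); // ...and re-create it
store.emit();                // the fresh listener reads 'new'

console.log(JSON.stringify(seen)); // ["old","new"]
```

The same reasoning applies inside the block: each render needs a listener that closes over that render’s attributes, which is exactly what the useEffect cleanup pattern provides.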
https://macarthur.me/posts/clean-up-your-redux-store-listeners-when-component-state-updates
Django REST Framework & Channels

We’ve begun exploring some patterns for how to add WebSocket push notifications to what is otherwise a RESTful API. This means, for us, using Django REST Framework and Django Channels in concert. A ways back, Tom Christie, the creator of Django REST Framework (DRF), said:

I think the biggest thing missing for folks ATM is probably just a lack of tutorials or blog posts on using Channels and REST framework together.

I realized that’s exactly what we’re working on lately at OddBird, so I thought I’d write up an in-progress report. I’m not at all convinced that this is the best way, but it’s what seems to be working so far.

The Basics

First, DRF. I’ll just touch briefly on this, assuming that you’re already familiar with it. If you’re not, their own documentation is better than any summary I can offer here. We’ve made a pretty traditional RESTful API with it, keeping the endpoints flat, with minimal server-side logic mostly encapsulated in the serializers. Just so we’re on the same page, the endpoints look a bit like this:

GET /api/foo
POST /api/foo
GET /api/foo/:id
PUT /api/foo/:id
DELETE /api/foo/:id

And the code is organized like this:

foo/
    __init__.py
    models.py
    serializers.py
    urls.py
    views.py
    tests/
        __init__.py
        models.py
        serializers.py
        views.py

(For something that I’m not distributing as a library, I like to keep the tests parallel to the code and within the same tree; I find it makes it easier to work with the tests that pertain to the code you’re touching as you work on it. If I’m writing a library, I root the tests in a different tree, but still with parallel structure to the code; this makes it easier to exclude them on an install.)

Inside those files, we mostly have simple declarative use of DRF. Follow their tutorial if you want to get that set up. We use pytest for all our Python tests, and require 100% test coverage.
Because of this, we can’t just skip anything that’s “too hard” to test, so I will talk a bit about our testing setup later.

Now Channels. I don’t encounter people as familiar with it as I do with DRF, so I’ll walk through how we set it up a little more. It’s not too bad, but it is different than you may be used to from basic Django. First, you need to move from using WSGI to ASGI, which is “basically WSGI, but async”. This means changing your server process from gunicorn (or whatever you use) to something like Daphne, changing your project.wsgi module to project.asgi (as described in the Channels docs), adding a routing module and a consumers module, and adjusting your settings appropriately. At this stage, you won’t yet have anything in consumers, nor much in routing. routing can look like this:

from channels.routing import ProtocolTypeRouter

application = ProtocolTypeRouter({
    # (http->django views is added by default)
})

Yep, that’s basically an empty ProtocolTypeRouter. We’re just first making sure we don’t break anything with the transition to ASGI, and that ProtocolTypeRouter correctly wires HTTP to Django. Once that’s all done and you’ve confirmed that everything’s still working, you can start to add in the wiring for WebSockets.

WebSockets, Consumers, and Groups

Let’s talk a bit about architecture before we dive into implementation. As we’re using it, Channels primarily drives WebSockets to push notifications to the client. We’ve opted to simplify the client’s job by having one endpoint that it can call to subscribe to any object it wants, using the payload it sends to validate and set up that subscription. So the client sends the following data to wss://server.domain/ws/notifications/:

{
    "model": "app.label",
    "id": "123ABC"
}

The model is something like foo.Foo, using the syntax apps.get_model expects. The id is the HashID of the model instance in question.
(We use HashIDs everywhere we can, to avoid leaking information through consecutive ID numbers.)

The server will then decide if the requesting user can subscribe to that model, and start sending them updates over that WebSocket if so. On the server’s side of things, we have a Consumer object that handles a bunch of WebSocket events, and, when appropriate, adds a particular socket connection to a named Group. Elsewhere in the server logic, we send events to that Group when the model changes, and all subscribed sockets will receive a serialization of the model with the changes. (Since we’re using React on the front-end for this project, we’re also sending a value that happens to map to the Redux event names we’re using, but that sort of tight coupling may not match your needs.)

OK, but what does that Consumer look like?

from channels.generic.websocket import AsyncJsonWebsocketConsumer

class NotificationConsumer(AsyncJsonWebsocketConsumer):

    async def connect(self):
        # We're always going to accept the connection, though we may
        # close it later based on other factors.
        await self.accept()

    async def notify(self, event):
        """
        This handles calls elsewhere in this codebase that look like:

            channel_layer.group_send(group_name, {
                'type': 'notify',  # This routes it to this handler.
                'content': json_message,
            })

        Don't try to directly use send_json or anything; this
        decoupling will help you as things grow.
        """
        await self.send_json(event["content"])

    async def receive_json(self, content, **kwargs):
        """
        This handles data sent over the wire from the client.

        We need to validate that the received data is of the correct
        form. You can do this with a simple DRF serializer.

        We then need to use that validated data to confirm that the
        requesting user (available in self.scope["user"] because of the
        use of channels.auth.AuthMiddlewareStack in routing) is allowed
        to subscribe to the requested object.
""" serializer = self.get_serializer(data=content) if not serializer.is_valid(): return # Define this method on your serializer: group_name = serializer.get_group_name() # The AsyncJsonWebsocketConsumer parent class has a # self.groups list already. It uses it in cleanup. self.groups.append(group_name) # This actually subscribes the requesting socket to the # named group: await self.channel_layer.group_add( group_name, self.channel_name, ) def get_serializer(self, *, data): # ... omitted for brevity. See # And now you’ll want to add some stuff to your routing module, too: from django.urls import path from channels.auth import AuthMiddlewareStack from channels.routing import ProtocolTypeRouter, URLRouter from .consumers import NotificationConsumer websockets = URLRouter([ path( "ws/notifications/", NotificationConsumer, name="ws_notifications", ), ]) application = ProtocolTypeRouter({ # (http->django views is added by default) "websocket": AuthMiddlewareStack(websockets), }) There are a couple more pieces. We need to actually send updates when a model changes! We separate out those concerns. We add a notifications module with the appropriate functions to wrap up the data and send it over the channels layer, and then we call out to those functions in the models’ save methods. First, the notifications module: we define an async function that will build and send an appropriately-shaped object to the appropriate group on the channel layer. This is part of our API, and the output of all the helper functions here should be documented for anyone who consumes this API. from channels.layers import get_channel_layer from .serializers import FooSerializer async def update_foo(foo): serializer = FooSerializer(foo) group_name = serializer.get_group_name() channel_layer = get_channel_layer() content = { # This "type" passes through to the front-end to facilitate # our Redux events. 
"type": "UPDATE_FOO", "payload": serializer.data, } await channel_layer.group_send(group_name, { # This "type" defines which handler on the Consumer gets # called. "type": "notify", "content": content, }) And then our models relies on three things: an override in the save method, the FieldTracker from django-model-utils, and calling the update method from notifications wrapped in asgiref.sync.async_to_sync. This looks like: from django.db import models # Using FieldTracker from django-model-utils helps you only send # updates when something actually changes. from model_utils import FieldTracker from asgiref.sync import async_to_sync class Foo(models.Model): tracker = FieldTracker(fields=("bar",)) bar = models.CharField(max_length=100) def save(self, *args, **kwargs): ret = super().save(*args, **kwargs) has_changed = self.tracker.has_changed("bar") if has_changed: # This is the wrapper that lets you call an async # function from inside a synchronous context: async_to_sync(update_foo)(self) return ret TestingCopy permalink to “Testing” Testing async code with pytest is best done with the pytest-asyncio package. This allows you to write tests that are themselves async functions, if you use the @pytest.mark.asyncio marker on them. The Channels docs have some more details on how to test consumers this way. The one caution I can offer is be sure to read from your consumer at each point where you expect it to have new data, or your tests may fall down with hard-to-diagnose timeout errors. So your tests will look a little like this: connected, _ = await communicator.connect() assert connected await communicator.send_json_to({ "model": "as.Appropriate", "id": str(some_model.id), }) assert await communicator.receive_nothing() await some_notification_async_function() response = await communicator.receive_json_from() assert response == { # ... 
    # whatever you expect
}

await communicator.disconnect()

Final Thoughts

This is a work in progress, of course. As we iron out the kinks, I intend to wrap up the easily isolated pieces of logic into a package we can distribute. I think that this will involve a particular Consumer, a serializer mixin, a model mixin, and a particular notifications module.

One particular problem we’ve found, and not yet solved, is what happens when you change a serializer based on the requesting user. For example, if you want to only show a restricted version of the User unless it is the user requesting their own information, how do we handle this when serializing for the websocket? I don’t have a good answer yet.

Let us know if you try this, or have ideas for improvements! This is new ground for me, and I’d love to have some different perspectives on it.
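To make the group fan-out at the heart of this design concrete, here is a minimal in-memory sketch in plain asyncio. It is my own illustration with hypothetical names, and it is far simpler than the real channels layer, but it shows the group_add/group_send contract the consumer and notifications module rely on:

```python
import asyncio

class InMemoryChannelLayer:
    """Hypothetical in-memory stand-in for the channels layer: named
    groups fan a message out to every subscribed channel's queue.
    (Illustration only -- not the real channels API.)"""

    def __init__(self):
        self.groups = {}   # group name -> set of channel names
        self.queues = {}   # channel name -> asyncio.Queue

    async def group_add(self, group, channel):
        self.groups.setdefault(group, set()).add(channel)
        self.queues.setdefault(channel, asyncio.Queue())

    async def group_send(self, group, message):
        for channel in self.groups.get(group, ()):
            await self.queues[channel].put(message)

    async def receive(self, channel):
        return await self.queues[channel].get()

async def main():
    layer = InMemoryChannelLayer()
    # Two sockets subscribe to updates for the same model instance.
    await layer.group_add("foo.Foo-123ABC", "socket-1")
    await layer.group_add("foo.Foo-123ABC", "socket-2")
    # The model's save() side notifies every subscriber at once.
    await layer.group_send(
        "foo.Foo-123ABC",
        {"type": "notify", "content": {"type": "UPDATE_FOO"}},
    )
    return [await layer.receive(c) for c in ("socket-1", "socket-2")]

received = asyncio.run(main())
print(received)
```

Both queues receive the same notify message, which is exactly the behavior the Consumer's notify handler depends on when multiple clients watch one object.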
https://www.oddbird.net/2018/12/12/channels-and-drf/
SYNOPSIS

#include <signal.h>

int pthread_sigmask(int how, const sigset_t *restrict set,
       sigset_t *restrict oset);
int sigprocmask(int how, const sigset_t *restrict set,
       sigset_t *restrict oset);

DESCRIPTION

The pthread_sigmask() function shall examine or change (or both) the calling thread's signal mask, regardless of the number of threads in the process. If the argument set is not a null pointer, it points to a set of signals to be used to change the currently blocked set. The argument how indicates the way in which the set is changed, and the application shall ensure it consists of one of the following values:

SIG_BLOCK
       The resulting set shall be the union of the current set and the signal set pointed to by set.

SIG_SETMASK
       The resulting set shall be the signal set pointed to by set.

SIG_UNBLOCK
       The resulting set shall be the intersection of the current set and the complement of the signal set pointed to by set.

If the argument oset is not a null pointer, the previous mask shall be stored in the location pointed to by oset. If set is a null pointer, the value of the argument how is not significant and the thread's signal mask shall be unchanged; thus the call can be used to enquire about currently blocked signals.

If there are any pending unblocked signals after the call to sigprocmask(), at least one of those signals shall be delivered before the call to sigprocmask() returns.

It is not possible to block those signals which cannot be ignored. This shall be enforced by the system without causing an error to be indicated.

ERRORS

The pthread_sigmask() and sigprocmask() functions shall fail if:

[EINVAL]
       The value of the how argument is not equal to one of the defined values.

The following sections are informative.

EXAMPLES
http://www.linux-directory.com/man3/pthread_sigmask.shtml
I’ve been looking into different Javascript frontends to see if they made my development life easier. I’ve been looking specifically to see if they improve on some Stimulus sprinkles and server rendered HTML. One interesting example was converting a text area from markdown to html. It uses marked to perform the conversion, and it uses lodash to set up a timer for the conversion. I’ll show you today how you perform the same behavior using a stimulus controller, without the need for the lodash dependency.

HTML

Vue.js

Vue.js has a very compact format for HTML annotations, so the HTML is pretty compact, and it includes the required javascript libraries:

<script src=""></script>
<script src=""></script>

<div id="editor">
  <textarea :value="input" @input="update"></textarea>
  <div v-html="compiledMarkdown"></div>
</div>

Stimulus.js

Stimulus has the advantage of not using ids to connect HTML to the Javascript code. Although it does require more characters, I think you’ll see the clarity of what this HTML snippet attempts to set up:

<div class="editor" data-controller="markdown-editor">
  <textarea data-action="input->markdown-editor#convertToMarkdown"></textarea>
  <div data-target="markdown-editor.viewer"></div>
</div>

You see that there is a controller in charge of this div, you see where the action comes from, and you see where the compiled markdown is going to go.

Javascript

Vue.js

Vue.js uses a standard component, keyed off of the div’s id property. It sets up some default data, handles changes to the textarea, and updates the compiled markdown html:

new Vue({
  el: '#editor',
  data: {
    input: '# hello'
  },
  computed: {
    compiledMarkdown: function () {
      return marked(this.input, { sanitize: true })
    }
  },
  methods: {
    update: _.debounce(function (e) {
      this.input = e.target.value
    }, 300)
  }
})

Stimulus.js

We’ll use a Stimulus controller to watch for updates to the text editor, compile the markdown, and update the markdown view.
This goes in a markdown_editor_controller.js:

import { Controller } from "stimulus"
import marked from 'marked/lib/marked.js'

export default class extends Controller {
  static targets = ["viewer"]

  convertToMarkdown(event) {
    this.viewerTarget.innerHTML = marked(event.target.value, { sanitize: true });
  }
}

marked.js is added via webpacker, so you need to run yarn add marked. This will make marked importable in our controller.

Conclusion

I hope this shows some of the similarities and differences between Vue.js and Stimulus.js. Hopefully you have another data point when deciding on what type of Javascript framework to pick. And if you’re using Ruby and Rails, you can leverage its server-rendered HTML to do a lot of the layout work for you, without needing to resort entirely to a JSON API and Javascript templating.

You can see that both frameworks let you write interactive web apps. Vue.js requires more setup in Javascript, and Stimulus.js puts more in the HTML structure.

Want To Learn More?

Try out some more of my Stimulus.js Tutorials.
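One difference worth noting: the Vue version debounces input through lodash, while the Stimulus controller above converts on every keystroke. If you want the debounced behavior without the lodash dependency, a small hand-rolled helper works; this is my own sketch, not code from either framework:

```javascript
// Minimal debounce helper (hypothetical; not part of the original
// controller). Delays `fn` until `wait` ms after the most recent call.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Demo: three rapid calls collapse into a single invocation.
let calls = 0;
const bump = debounce(() => { calls += 1; }, 50);
bump(); bump(); bump();

setTimeout(() => {
  console.log('calls:', calls); // one invocation after the burst
}, 150);
```

In the controller you would wrap the handler once, for example in connect() with something like this.convertToMarkdown = debounce(this.convertToMarkdown.bind(this), 300); the method and delay here are assumptions to match the Vue example.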
https://johnbeatty.co/2018/11/09/build-a-markdown-editor-in-stimulus-js-like-in-vue-js/
Building addons in TypeScript offers many of the same benefits as building apps that way: it puts an extra tool at your disposal to help document your code and ensure its correctness. For addons, though, there's one additional bonus: publishing type information for your addons enables autocomplete and inline documentation for your consumers, even if they're not using TypeScript themselves.

To process .ts files, ember-cli-typescript tells Ember CLI to register a set of Babel plugins so that Babel knows how to strip away TypeScript-specific syntax. This means that ember-cli-typescript operates according to the same set of rules as other preprocessors when used by other addons. Like other addons that preprocess source files, ember-cli-typescript must be in your addon's dependencies, not devDependencies. Because addons have no control over how files in app/ are transpiled, you cannot have .ts files in your addon's app/ folder.

When you publish an addon written in TypeScript, the .ts files will be consumed and transpiled by Babel as part of building the host application the same way .js files are, in order to meet the requirements of the application's config/targets.js. This means that no special steps are required for your source code to be consumed by users of your addon.

Even though you publish the source .ts files, though, by default your consumers who also use TypeScript won't be able to benefit from those types, because the TS compiler isn't aware of how ember-cli resolves import paths for addon files. For instance, if you write import { foo } from 'my-addon/bar';, the typechecker has no way to know that the actual file on disk for that import path is at my-addon/addon/bar.ts.

In order for your addon's users to benefit from type information from your addon, you need to put .d.ts declaration files at the location on disk where the compiler expects to find them. This addon provides two commands to help with that: ember ts:precompile and ember ts:clean.
The default ember-cli-typescript blueprint will configure your package.json to run these commands in the prepublishOnly and postpublish phases respectively, but you can also run them by hand to verify that the output looks as you expect.

The ts:precompile command will populate the overall structure of your package with .d.ts files laid out to match their import paths. For example, addon/index.ts would produce an index.d.ts file in the root of your package. The ts:clean command will remove the generated .d.ts files, leaving your working directory back in a pristine state.

The TypeScript compiler has very particular rules when generating declaration files to avoid letting private types leak out unintentionally. You may find it useful to run ember ts:precompile yourself as you're getting a feel for these rules to ensure everything will go smoothly when you publish.

Often when developing an addon, it can be useful to run that addon in the context of some other host app so you can make sure it will integrate the way you expect, e.g. using yarn link or npm link. When you do this for a TypeScript addon, the source files will be picked up in the host app build and everything will execute at runtime as you'd expect. If the host app is also using TypeScript, though, it won't be able to resolve imports from your addon by default, for the reasons outlined above in the Publishing section. You could run ember ts:precompile in your addon any time you change a file, but for development a simpler option is to temporarily update the paths configuration in the host application so that it knows how to resolve types from your linked addon. Add entries for <addon-name> and <addon-name>/* in your tsconfig.json like so:

compilerOptions: {
  // ...other options
  paths: {
    // ...other paths, e.g. for your app/ and tests/ trees
    // resolve: import x from 'my-addon';
    "my-addon": ["node_modules/my-addon/addon"],
    // resolve: import y from 'my-addon/utils/y';
    "my-addon/*": ["node_modules/my-addon/addon/*"]
  }
}

In-repo addons work in much the same way as linked ones. Their .ts files are managed automatically by ember-cli-typescript in their dependencies, and you can ensure imports resolve correctly from the host by adding entries in paths in the base tsconfig.json file. Note that the in-repo-addon blueprint should automatically add these entries if you have ember-cli-typescript-blueprints installed when you run it.

compilerOptions: {
  // ...other options
  paths: {
    // ...other paths, e.g. for your tests/ tree
    "my-app": [
      "app/*",
      // add addon app directory that will be merged with the host application
      "lib/my-addon/app/*"
    ],
    // resolve: import x from 'my-addon';
    "my-addon": ["lib/my-addon/addon"],
    // resolve: import y from 'my-addon/utils/y';
    "my-addon/*": ["lib/my-addon/addon/*"]
  }
}

One difference as compared to regular published addons: you know whether or not the host app is using ember-cli-typescript, and if it is, you can safely put .ts files in an in-repo addon's app/ folder.
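For reference, the package.json wiring that runs ts:precompile before publish and ts:clean afterward looks roughly like this (a sketch of what the blueprint sets up; your generated file may differ slightly):

```json
{
  "scripts": {
    "prepublishOnly": "ember ts:precompile",
    "postpublish": "ember ts:clean"
  }
}
```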
https://docs.ember-cli-typescript.com/ts/with-addons
This blog will detail how a Microservice can be triggered to run when an event is published into the Kyma environment. To eliminate any barriers, we will utilize the SAP Commerce mock application to send events, but the process flow would be the same if using SAP Commerce or any other connected application. This blog continues from the setup of the SAP Commerce mock application detailed in this blog. The setup of SAP Commerce can be found in this blog. If you already have the application bound to a namespace and an instance of the SAP Commerce Cloud – Events created, you can skip down to the step: Creating the Microservice.

In the home workspace of your Kyma instance choose the commerce-mock application and then choose Create Binding and bind the application to your mocks namespace. This will provide access to the commerce-mock application in your mocks namespace. This will also result in one Bound Application showing in the Namespaces view. Choose the mocks namespace tile to open the namespace. Choose the Catalog, Services and then choose SAP Commerce Cloud – Events, which is labeled with commerce-mock. Choose Add to add a service instance, which will allow us to utilize the events within this namespace. Accept the default values and choose Create Instance.

Creating the Microservice

For the microservice, we will use an example found on the Kyma project github site in the examples repository. Download and save the deployment file found here as deployment.yaml

The deployment file consists of a definition to define a Service and a Deployment. Notice how each definition is separated by "---", which is required to define multiple definitions within a single deployment file. The Deployment definition defines a deployment named http-db-service which exposes a number of endpoints. For this example we will focus on the endpoint /events/order/created which will be utilized to capture an order.created event from our mock application.
More information regarding this example service can be found here. The Service definition exposes the deployment as a local service url running as http://<service.metadata.name>.<namespace>:<service.spec.ports.port> which in our case will be

To subscribe the microservice to an event, we can utilize the CustomResourceDefinition (CRD) subscriptions.eventing.kyma-project.io provided by Kyma. The CRD provides us with a set of parameters to configure our subscriptions, which can be found here. CRDs are used to extend the functionality provided by Kubernetes; more detailed information about CRDs can be found here.

Copy the contents of this subscription definition into the deployment.yaml file, making sure to include the separator.

---
apiVersion: eventing.kyma-project.io/v1alpha1
kind: Subscription
metadata:
  name: commerce-mock-subscription
  labels:
    example: commerce-mock-subscription
spec:
  endpoint:
  include_subscription_name_header: true
  push_request_timeout_ms: 2000
  max_inflight: 10
  event_type: order.created
  event_type_version: v1
  source_id: commerce-mock

Take note that if you are not following this example exactly, the endpoint, event_type, event_type_version and source_id may need to be modified. The first three values I believe are self explanatory, and the source_id represents the name of the application as shown in the Applications list found in the home workspace.

With the subscription added to the deployment.yaml we can now deploy the example into our mocks namespace. Within the mocks overview screen click the Deploy new resource to the namespace button and choose and upload the deployment.yaml file. You should receive a message indicating that the resource has been created after uploading the file.

To verify that the subscription has been registered correctly, choose the Logs option in the home workspace. We can use the following label to view the subscription result, which should show that it has been created and reconciled as shown in the image below.
{app="event-bus-subscription-controller-knative"}

After verifying that the subscription has been registered, open the commerce mock application, choose Remote APIs and then choose SAP Commerce Cloud – Events. This will navigate you to a page where you can send an event and also provides details regarding the API. In Event Topics choose order.created.v1 from the drop down list for the desired event. This will place an entry into the field below, which can be edited as desired. This is the data object that will be sent and consumed by the microservice. Press the Send Event button to trigger the event.

To verify that the event has been consumed, open the logs and use the following search value to verify the output of the microservice. You should find the message "Handling event … with my custom logic".

{app="http-db-service"} Handling

Next, explore the tracing features of Kyma, which are provided by Jaeger. The intent of this section is to outline how a trace should appear for a working microservice. Using a known working example should help identify possible focus areas in the case of troubleshooting issues. In the home workspace choose the Tracing menu. Within the Find Traces pane, set commerce-mock-event-service.kyma-integration as the chosen service and then choose Find Traces to show the results of the event order.created being sent. Notice how the event has spanned multiple services. Any of these services could be used as the selected service in Find Traces to show the same information. Choose the trace to obtain further information.

Here you can see the entire flow of the trace in the order it was processed, and the time utilized by each service. Choosing the http-db-service.mocks span, we can verify that the event was published to the http-db-service endpoint.
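To see the shape of the consuming side, here is a minimal stand-in for such an endpoint using only Python's standard library. This is a hypothetical sketch: the real http-db-service is a separate example program from the Kyma repository, and the orderCode field name used here is an assumption about the event payload.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderCreatedHandler(BaseHTTPRequestHandler):
    """Hypothetical stand-in for the endpoint the subscription pushes
    order.created events to."""

    def do_POST(self):
        if self.path != "/events/order/created":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        print(f"Handling event for order {event['orderCode']} with my custom logic")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), OrderCreatedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the event bus pushing an order.created payload.
port = server.server_address[1]
request = urllib.request.Request(
    f"http://127.0.0.1:{port}/events/order/created",
    data=json.dumps({"orderCode": "123"}).encode(),
    headers={"Content-Type": "application/json"},
)
status = urllib.request.urlopen(request).status
server.shutdown()
print("status:", status)
```

In the Kyma setup, the Subscription resource plays the role of the simulated POST here, pushing each matching event to the endpoint declared in its spec.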
https://blogs.sap.com/2019/07/16/sap-c4hana-extensibility-triggering-microservices-with-events/
Set a clock

#include <time.h>

int clock_settime( clockid_t id,
                   const struct timespec * tp );

The clock_settime() function sets the clock specified by id to the time specified in the buffer pointed to by tp.

If you change the time for CLOCK_REALTIME, the change occurs immediately, and the expiry time for some timers might end up in the past. In this case, the timers will expire on the next clock tick. If you need the affected timers to expire before the next clock tick, then before changing the time, set a high-resolution timer to expire just after the new time; after you change the time, the high-resolution timer will expire and so will all the affected timers.
https://www.qnx.com/developers/docs/7.1/com.qnx.doc.neutrino.lib_ref/topic/c/clock_settime.html
Field in Fields. Author and a Book

Is there any good example for Subscription with a Spring Boot application? I have tried the example in the spring boot examples repo, but it is stuck at: "Your subscription data will appear here after server publication!" I have found this too: but dependencies are missing and it is Kotlin and not Java...

Hi! I’ve two queries which don’t have parameters: GetInventory(): Int! and: LogoutUser(): null Both fail with the following error: Syntax Error: Expected Name, found ) If I add name: String it works but they don’t need arguments. Can you please assist me in solving this error?

I use import {buildSchema} from 'graphql'; to build the schema for the express-graphql server. const schema = buildSchema([typeDefs, operations].join('\r\n')); Where typeDefs is a concatenated string of all types and operations contains concatenated mutations and queries. this is the contents of operations: /meRPC to get current user data.
https://gitter.im/graphql-go/graphql?at=5bd9b068995818347b7f2c4e
19+ JavaScript Shorthand Coding Techniques
By Michael Wanyoike, Sam Deering

Longhand:

let answer;

if (x > 10) {
    answer = 'is greater';
} else {
    answer = 'is lesser';
}

Shorthand:

const answer = x > 10 ? 'is greater' : 'is lesser';

You can also nest your if statement like this:

const big = x > 10 ? " greater 10" : x

Arrow functions with a single statement will implicitly return the result of its evaluation (the function must omit the braces ({}) in order to omit the return keyword). To return a multi-line statement (such as an object literal), it’s necessary to use () instead of {} to wrap your function body. This ensures the code is evaluated as a single statement.

Longhand:

function calcCircumference(diameter) {
    return Math.PI * diameter
}

Shorthand:

const calcCircumference = diameter => (
    Math.PI * diameter
);

11. Default Parameter Values

You can use the if statement to define default values for function parameters. In ES6, you can define the default values in the function declaration itself.

Longhand:

function volume(l, w, h) {
    if (w === undefined) w = 3;
    if (h === undefined) h = 4;
    return l * w * h;
}

Shorthand:

const volume = (l, w = 3, h = 4) => (l * w * h);

volume(2) //output: 24

12. Template Literals

Aren’t you tired of using ' + ' to concatenate multiple variables into a string? Isn’t there a much easier way of doing this? If you are able to use ES6, then you are in luck. All you need to do is use the backtick, and ${} to enclose your variables.

Longhand:

const welcome = 'You have logged in as ' + first + ' ' + last + '.'

const db = 'http://' + host + ':' + port + '/' + database;

Shorthand:

const welcome = `You have logged in as ${first} ${last}`;

const db = `http://${host}:${port}/${database}`;

13. Destructuring Assignment Shorthand

If you are working with any popular web framework, there are high chances you will be using arrays or data in the form of object literals to pass information between components and APIs.
Once the data object reaches a component, you’ll need to unpack it.
Longhand:
const observable = require('mobx/observable');
const action = require('mobx/action');
const runInAction = require('mobx/runInAction');

const store = this.props.store;
const form = this.props.form;
const loading = this.props.loading;
const errors = this.props.errors;
const entity = this.props.entity;
Shorthand:
import { observable, action, runInAction } from 'mobx';

const { store, form, loading, errors, entity } = this.props;
You can even assign your own variable names:
const { store, form, loading, errors, entity: contact } = this.props;

14. Multi-line String Shorthand
If you have ever found yourself in need of writing multi-line strings in code, this is how you would write it:
Longhand:
const lorem = 'Lorem ipsum dolor sit amet, consectetur\n\t' +
    'adipisicing elit, sed do eiusmod tempor incididunt\n\t' +
    'ut labore et dolore magna aliqua. Ut enim ad minim\n\t' +
    'veniam, quis nostrud exercitation ullamco laboris\n\t' +
    'nisi ut aliquip ex ea commodo consequat. Duis aute\n\t' +
    'irure dolor in reprehenderit in voluptate velit esse.\n\t'
But there is an easier way. Just use backticks.
Shorthand:
const lorem = `Lorem ipsum dolor sit amet, consectetur
    adipisicing elit, sed do eiusmod tempor incididunt
    ut labore et dolore magna aliqua. Ut enim ad minim
    veniam, quis nostrud exercitation ullamco laboris
    nisi ut aliquip ex ea commodo consequat. Duis aute
    irure dolor in reprehenderit in voluptate velit esse.`

15. Spread Operator Shorthand
The spread operator, introduced in ES6, has several use cases that make JavaScript code more efficient and fun to use. It can be used to replace certain array functions. The spread operator is simply a series of three dots.
Longhand:
// joining arrays
const odd = [1, 3, 5];
const nums = [2, 4, 6].concat(odd);

// cloning arrays
const arr = [1, 2, 3, 4];
const arr2 = arr.slice();
Shorthand:
// joining arrays
const odd = [1, 3, 5];
const nums = [2, 4, 6, ...odd];
console.log(nums); // [ 2, 4, 6, 1, 3, 5 ]

// cloning arrays
const arr = [1, 2, 3, 4];
const arr2 = [...arr];
Unlike the concat() function, you can use the spread operator to insert an array anywhere inside another array.
const odd = [1, 3, 5];
const nums = [2, ...odd, 4, 6];
You can also combine the spread operator with ES6 destructuring notation:
const { a, b, ...z } = { a: 1, b: 2, c: 3, d: 4 };
console.log(a); // 1
console.log(b); // 2
console.log(z); // { c: 3, d: 4 }

16. Mandatory Parameter Shorthand
By default, JavaScript will set function parameters to undefined if they are not passed a value. Some other languages will throw a warning or error. To enforce parameter assignment, you can use an if statement to throw an error if undefined, or you can take advantage of the ‘Mandatory parameter shorthand’.
Longhand:
function foo(bar) {
    if (bar === undefined) {
        throw new Error('Missing parameter!');
    }
    return bar;
}
Shorthand:
mandatory = () => {
    throw new Error('Missing parameter!');
}
foo = (bar = mandatory()) => {
    return bar;
}

17. Array.find Shorthand
If you have ever been tasked with writing a find function in plain JavaScript, you would probably have used a for loop. In ES6, a new array function named find() was introduced.
Longhand:
const pets = [
    { type: 'Dog', name: 'Max' },
    { type: 'Cat', name: 'Karl' },
    { type: 'Dog', name: 'Tommy' },
]
function findDog(name) {
    for (let i = 0; i < pets.length; ++i) {
        if (pets[i].type === 'Dog' && pets[i].name === name) {
            return pets[i];
        }
    }
}
Shorthand:
pet = pets.find(pet => pet.type === 'Dog' && pet.name === 'Tommy');
console.log(pet); // { type: 'Dog', name: 'Tommy' }

18. Object [key] Shorthand
Did you know that Foo.bar can also be written as Foo['bar']? At first, there doesn’t seem to be a reason why you should write it like that. However, this notation gives you the building block for writing re-usable code. Consider this simplified example of a validation function:
function validate(values) {
    if (!values.first) return false;
    if (!values.last) return false;
    return true;
}
console.log(validate({ first: 'Bruce', last: 'Wayne' })); // true
This function does its job perfectly.
However, consider a scenario where you have very many forms where you need to apply the validation but with different fields and rules. Wouldn’t it be nice to build a generic validation function that can be configured at runtime?
Shorthand:
// object validation rules
const schema = {
    first: { required: true },
    last: { required: true }
}

// universal validation function
const validate = (schema, values) => {
    for (field in schema) {
        if (schema[field].required) {
            if (!values[field]) {
                return false;
            }
        }
    }
    return true;
}

console.log(validate(schema, { first: 'Bruce' })); // false
console.log(validate(schema, { first: 'Bruce', last: 'Wayne' })); // true
Now we have a validate function we can reuse in all forms without needing to write a custom validation function for each.

19. Double Bitwise NOT Shorthand
Bitwise operators are one of those features you learn about in beginner JavaScript tutorials and you never get to implement them anywhere. Besides, who wants to work with ones and zeroes if you are not dealing with binary? There is, however, a very practical use case for the Double Bitwise NOT operator. You can use it as a replacement for Math.floor(). The advantage of the Double Bitwise NOT operator is that it performs the same operation much faster. You can read more about Bitwise operators here.
Longhand:
Math.floor(4.9) === 4; // true
Shorthand:
~~4.9 === 4; // true

20. Suggest One?
I really do love these and would love to find more, please leave a comment!
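Taking up the invitation in #20, here are two more shorthands I'd suggest (my additions, not the original authors'): the ES2016 exponent operator as a replacement for Math.pow, and unary plus for string-to-number conversion.

```javascript
// Longhand
const power = Math.pow(2, 10);
const num = Number('42');

// Shorthand: exponent operator (ES2016) and unary plus
const power2 = 2 ** 10;
const num2 = +'42';

console.log(power2); // 1024
console.log(num2); // 42
```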
https://www.sitepoint.com/shorthand-javascript-techniques/
Given:

Bread.java:
public class Bread {
    private String eat(String piece) {
        return "Consume " + piece;
    }
}

Pizza.java:
public class Pizza extends Bread {
    public String eat(String slice) {
        return "Enjoy " + slice;
    }
}

Test.java:
public class Test {
    public static void main(String[] args) {
        Bread b1 = new Bread();
        b1.eat("bread.");
        Bread b2 = new Pizza();
        b2.eat("Quattro Stagioni.");
    }
}

What is the result?

A. Consume bread. Enjoy Quattro Stagioni.
B. Consume bread. Consume Quattro Stagioni.
C. The Pizza.java file fails to compile.
D. The Test.java file fails to compile.

The correct answer is D. The eat() method in the parent class Bread is private, so it is not visible outside the Bread class; both calls in Test.java's main() fail to compile. Pizza.java itself compiles fine, because its public eat() does not override the private method — it simply declares a new one.
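To see why visibility matters here, consider a hedged variant (the class names Bread2/Pizza2 and the Demo driver are my own illustration, not part of the exam question) in which eat() is declared public in the parent. Now the method is inherited, Pizza2's version overrides it, and dynamic dispatch selects the subclass implementation:

```java
// Hypothetical variant of the question's classes with a public eat()
class Bread2 {
    public String eat(String piece) {
        return "Consume " + piece;
    }
}

class Pizza2 extends Bread2 {
    @Override
    public String eat(String slice) {
        return "Enjoy " + slice;
    }
}

class Demo {
    public static void main(String[] args) {
        Bread2 b1 = new Bread2();
        Bread2 b2 = new Pizza2(); // static type Bread2, dynamic type Pizza2
        System.out.println(b1.eat("bread."));            // Consume bread.
        System.out.println(b2.eat("Quattro Stagioni.")); // Enjoy Quattro Stagioni.
    }
}
```

With the original private eat(), no overriding occurs at all, and the calls in Test.java simply do not compile.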
http://igor.host/index.php/2017/08/06/ocp-question-34-explanation/
It’s common in C and C++ to use entirely uppercase names for constants, for example:

const int LOWER = 0;
const int UPPER = 300;
const int STEP = 20;

enum MetaSyntacticVariable { FOO, BAR, BAZ };

I think this is a terrible convention in C++ code and if you don’t already agree I hope I can convince you. Maybe one day we can rid the world of this affliction, except for a few carefully-controlled specimens kept far away where they can’t hurt anyone, like smallpox.

Using uppercase names for constants dates back (at least) to the early days of C and the need to distinguish symbolic constants, defined as macros or enumerators, from variables:

Symbolic constant names are conventionally written in upper case so they can be readily distinguished from lower case variable names. [Kernighan88]

The quoted text follows these macro definitions:

#define LOWER 0    /* lower limit of table */
#define UPPER 300  /* upper limit */
#define STEP 20    /* step size */

This convention makes sense in the context of C. For many years C didn’t have the const keyword, and even today you can’t use a const variable where C requires a constant-expression, such as declaring the bounds of a (non-variable length) array or the size of a bitfield. Furthermore, unlike variables, symbolic constants don’t have an address and can’t be assigned new values. So I will grudgingly admit that macros are a necessary evil for defining constants in C, and distinguishing them can be useful and a consistent naming convention helps with that. Reserving a set of identifiers (in this case, ‘all names written in uppercase’) for a particular purpose is a form of namespace, allowing you to tell at a glance that the names STEP and step are different and, by the traditional C convention, allowing you to assume one is a symbolic constant and the other is a variable.
Although some form of ad hoc namespace may be useful to tell symbolic constants and variables apart, I think it’s very unfortunate that the traditional convention reserves names that are so VISIBLE in the code and draw your ATTENTION to something as mundane as symbolic constants. An alternative might have been to always use a common prefix, say C_, for symbolic constants, but it’s too late to change nearly half a century of C convention now.

C’s restrictions on defining constants aren’t present in C++, where a const variable (of suitable type) initialised with a constant expression is itself a constant expression and where constexpr functions can produce compile-time constants involving non-trivial calculations on other constants. C++ also supports namespaces directly in the language, so the constants above could be defined as follows and referred to as FahrenheitToCelsiusConstants::step instead of STEP:

namespace FahrenheitToCelsiusConstants {
    // lower & upper limits of table, step size
    enum Type { lower=0, upper=300, step=20 };
}

That means C++ gives you much better tools than macros for defining properly typed and scoped constants.

Macros are very important in C but have far fewer uses in C++. The first rule about macros is: Don’t use them unless you have to. [Stroustrup00]

There are good reasons for avoiding macros apart from the fact that C++ provides higher-level alternatives. Many people are familiar with problems caused by the min and max macros defined in <windows.h>, which interfere with the names of function templates defined in the C++ standard library. The main problem is that macros don’t respect lexical scoping, they’ll stomp over any non-macro with the same name. Functions, variables, namespaces, you name it, the preprocessor will happily redefine it. Preprocessing is probably the most dangerous phase of C++ translation.
The preprocessor is concerned with tokens (the “words” of which the C++ source is composed) and is ignorant of the subtleties of the rest of the C++ language, both syntactic and semantic. In effect the preprocessor doesn’t know its own strength and, like many powerful ignoramuses, is capable of much damage. [Dewhurst02]

Stephen Dewhurst devotes a whole chapter to gotchas involving the preprocessor, demonstrating how constants defined as macros can behave in unexpected ways, and ‘pseudofunctions’ defined as macros may evaluate arguments more than once, or not at all.

So given that macros are less necessary and (in an ideal codebase) less widely-used in C++, it is important when macros are used to limit the damage they can cause and to draw the reader’s attention to their presence. We can’t use C++ namespaces to limit their scope, but we can use an ad hoc namespace in the form of a set of names reserved only for macros to avoid the problem of clashing with non-macros and silently redefining them. Conventionally we use uppercase names (and not single-character names; not only are short names undescriptive and unhelpful for macros, single-character names like T are typically used for template parameters).

Also to warn readers, follow the convention to name macros using lots of capital letters. [Stroustrup]

By convention, macro names are written in uppercase. Programs are easier to read when it is possible to tell at a glance which names are macros. [GCC]

Using uppercase names has the added benefit of SHOUTING to draw ATTENTION to names which don’t obey the usual syntactic and semantic rules of C++.

Do #undefine macros as soon as possible, always give them SCREAMING_UPPERCASE_AND_UGLY names, and avoid putting them in headers. [Sutter05]

When macro names stand out clearly from the rest of the code you can be careful to avoid reusing the name and you know to be careful using them, e.g. be aware of side-effects being evaluated twice:

#define MIN(A,B) (A) < (B) ? (A) : (B)

const int limit = 100;
// ...
return MIN(++n, limit);

But if you also use all-uppercase names for non-macros then you pollute the namespace. You no longer have the advantage of knowing which names are going to summon a powerful ignoramus to stomp on your code (including the fact that your carefully-scoped enumerator named FOO might not be used because someone else defined a macro called FOO with a different value), and the names that stand out prominently from the rest of the code might be something harmless and mundane, like the bound of an array. Constants are pretty dull, the actual logic using them is usually more interesting and deserving of the reader’s attention. Compare this to the previous code snippet, assuming the same definitions for the macro and constant, but with the case of the names changed:

return min(++n, LIMIT);

Is it more important to note that you’re limiting the return value to some constant LIMIT, rather than the fact that n is incremented? Or that you’re calling min rather than max or some other function? I don’t think LIMIT should be what grabs your attention here, it doesn’t even tell you what the limit is. It certainly isn’t obvious that n will be incremented twice!

So I’d like to make a plea to the C++ programmers of the world: stop naming (non-macro) constants in uppercase. Only use all-uppercase for macros, to warn your readers and limit the damage that the powerful ignoramus can do.

References

[Dewhurst02] C++ Gotchas, Stephen C. Dewhurst, Addison Wesley, 2002.
[GCC] ‘The GNU Compiler Collection: The C Preprocessor’, Free Software Foundation, 2014.
[Kernighan88] The C Programming Language, Second Edition, Brian W. Kernighan & Dennis M. Ritchie, Prentice Hall, 1988.
[Stroustrup00] The C++ Programming Language, Special Edition, Bjarne Stroustrup, Addison-Wesley, 2000.
[Sutter05] C++ Coding Standards, Herb Sutter & Alexei Alexandrescu, Addison-Wesley, 2005.
https://accu.org/index.php/journals/1923
Hi, I would like to use my webcam as a photo camera. I would like to have a function like "Mat GetOneFrame();" which I can call at short intervals to deliver just one up-to-date frame. How can I do this? I normally use my camera like this:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    namedWindow("video", 1);
    for (;;)
    {
        Mat frame;
        cap >> frame;
        imshow("video", frame);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}

I have tried to delete the for loop and then to use "cap >> frame" whenever I need it. But if I try this, my program freezes even if I wait a second before trying to get a new frame. What can I do? Thank you very much :-)
https://answers.opencv.org/questions/46532/revisions/
Creating Smart Tags in Web PartsThis content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release. Reza Chitsaz and Scott Phibbs Microsoft Corporation Created: April 2001 Revised: July 2001 Applies to: Microsoft® Office XP Developer Summary: Microsoft Office XP installs a helper behavior that enables Smart Tag support in Internet Explorer. The Smart Tag feature in Internet Explorer is present in a slightly different way than in Microsoft Word and Microsoft Excel. This document explains how to implement this technology within Web Parts of a Digital Dashboard. (6 printed pages) Contents Introduction Creating a Dashboard Project Creating HTML Web Parts with Smart Tags Saving Excel or Word Documents as Web Parts Containing Smart Tags Introduction There are two ways to implement Smart Tags within Web Parts. The first way is to create a Microsoft® Word or Microsoft Excel document that contains Smart Tags and save the document as a Web Part to a Microsoft SharePoint™ Portal Server or Microsoft Exchange 2000 SP1 Server dashboard. The other way is to add specific custom HTML tags to an HTML Web Part created in Microsoft Office XP Developer. This white paper will discuss these two methods of implementation using the sample SimpleTerm Smart Tag discussed in the Office XP Developer Smart Tag SDK. Prerequisites To follow along with this walkthrough, you must have Office XP Developer, access to a server running SharePoint Portal Server or Exchange 2000 SP1, and the appropriate permissions on the server. Creating a Digital Dashboard requires the following components: - Server SharePoint Portal Server or Exchange 2000 SP1 Server as the back end for the application. - Development tool Office XP Developer as the tool for creating the dashboard. 
This tool can either reside on a client computer with access to the server or on the server itself. The Dashboard Development Tools and Smart Tag SDK components must be installed. - Smart Tags Support Smart Tags installed on the development and client machines. Smart Tags support is included as part of the Office XP install. - Smart Tag SimpleTerm DLL registered using the following steps: - Run the SimpleTerm.reg file. - Register the SimpleTerm.dll using regsrv32.exe.Note These files are installed by default in the directory \Program Files\Microsoft Office Developer\SDK\Simple VB Sample. Creating a Dashboard Project A Digital Dashboard is an Active Server Page (ASP) that references one or more Web Parts. Web Parts are self-contained modules of Web content (XML, HTML, or script). You can use the Office XP Developer environment to create Digital Dashboard projects. When you do, Office Developer creates a Digital Dashboard project and adds a default HTML Web Part to the project. To create a dashboard project: - From the Start menu, point to Programs, point to Microsoft Office XP Developer, and then click Microsoft Development Environment. - On the File menu, point to New, and then click Project. The New Project dialog box is displayed. - Select Office Developer Projects. - Under Templates, select the Dashboard Project icon. - In the Location box, enter http://<ServerName>/Public. - In the Name box, enter a name for your dashboard project. - Click OK. These steps create a new project in your application and populate it with the necessary files. In this case, all the files required by the Digital Dashboard are copied onto the server and a <ProjectName>.ddp project file is added. The next step is to create a Web Part. Creating HTML Web Parts with Smart Tags Now that you have created a new dashboard project and have it open in the Office XP Development environment, you will add custom HTML tags to the default HTM Web Part to enable Smart Tag support. 
- In the Solution Explorer, double-click the Part.HTM Web Part that is added automatically to your dashboard project to open it in the Code Editor. - In the Code Editor, you should see default code that looks like this: - Enter this code between the two comments in the Code Editor: This declares the Office namespace, the Smart Tags namespace, and the Worldwide Web Consortium (W3C) HTML 4.0 namespace.Note Each Smart Tag can have its own namespace. Smart Tag namespaces are declared at the top of the HTML document. - Declare the Smart Tag type element by entering this code following the Smart Tags namespace: The SmartTagType element belongs to the Office namespace, as can be inferred from o:, which is the shorthand alias for the Office namespace. There are three attributes for this element: - name The name of the Smart Tag type. This attribute is required. - namespaceuri The namespace URI for the Smart Tag type. This attribute is required. - downloadurl A URL useful for downloading a Smart Tags package. This attribute is optional. The o:SmartTagType element is declared in the <HEAD>of an HTML document. - Insert the following object element code after the Smart Tag type declaration to enable Internet Explorer to render Smart Tags: - Enter the following code after the object element code, and add a closing </HEAD>tag to include the behavior attribute within the style declaration: - Enter this code after the </HEAD>tag: This is the body of the HTML document, which contains the paragraph you want displayed with Smart Tags. Portions of text to be Smart Tag enabled should be contained in their respective Smart Tag type namespaces. 
The complete Part.HTM Web Part source code should now look like this:

<HTML><BODY>
<!-- Do not edit anything above this comment -->
<HTML xmlns:
<HEAD>
<o:SmartTagType
</o:smarttagtype>
<object classid="clsid:38481807-CA0E-42D2-BF39-B33AF135CC4D" id=ieooui>
</object>
<style>
st1\:*{behavior:url(#ieooui) }
</style>
</HEAD>
<BODY>
I love drinking <st1:flavor>Latte</st1:flavor>!
</BODY>
</HTML>
<!-- Do not edit anything below this comment -->
</BODY></HTML>

- Save all the changes that you made to the Web Part.
- Preview your dashboard by right-clicking the Project folder (named Dashboard by default) in the Solution Explorer and selecting the View in Browser command from the shortcut menu.

After following the previous steps, you can hover over the text that has been Smart Tag enabled to display the Smart Tag actions.

Saving Excel or Word Documents as Web Parts Containing Smart Tags

A fast and easy way to create Web Parts that contain Smart Tags is by using Office XP. You can save your Office XP documents that contain Smart Tags as Web pages to a dashboard folder that lives on an Exchange 2000 SP1 server or SharePoint Portal Server. This will display a Web Part meta-data input form that prompts you for the Web Part name, description, and zone. To save an Office XP document as a Web Part:

- Open an Office XP document and enter content, including recognized Smart Tags. Note: If Smart Tags are not being recognized, you might have to recheck the document for Smart Tags. On the Tools menu, click AutoCorrect Options, and then click the Smart Tags tab. Click Recheck Document.
- Select Save as Web Page from the File menu.
- Enter the complete HTTP address of the dashboard, including a name for your Web Part, in the File Name box, for example, http://<ServerName>/Public/<DashboardName>/SmartTagsPart.HTM.
- Click Save. The Web File Properties dialog box will appear.
- In the Web File Properties dialog box, enter in a name, a description, and a zone for where you want your Web Part to be displayed. - Click OK. - Close the Office XP document. - In Office XP Developer, refresh the dashboard view by clicking the Refresh button in the Solution Explorer to display the new Office XP Web Part. Now you can double-click the Office XP Web Part and edit its contents from within the Office XP Developer development environment. - Preview your dashboard by right-clicking the dashboard folder in the Solution Explorer and selecting the View in Browser command from the shortcut menu. The previous steps show how to save an Office XP document that contains a Smart Tag as a Web Part to a Digital Dashboard. Previewing the dashboard in Internet Explorer and hovering over the Smart Tag enabled word within the Web Part will display the Smart Tag actions, thus showing that the Smart Tag has been retained within the Web Part. For more information about Digital Dashboards and Web Parts, see the Microsoft Office XP Developer online Help.
https://msdn.microsoft.com/en-us/library/aa188446(v=office.10).aspx
CHAPTER 4

I. Money Market and the LM curve (Liquidity Market)
A. Wealth Portfolio:
1. Money - currency and checkable deposits at commercial banks and thrift institutions.
a. Medium of exchange
b. Store of value
c. Unit of account (Dollars)
d. M1 = currency + transaction accounts (demand deposits, other checkable deposits, traveler's checks)
e. M2 = M1 + other highly liquid assets (saving deposits, money market accounts, small-denomination CDs)
2. Bonds (debt), stocks (equity), and other assets that earn a rate of return (i.e., nonmonetary financial assets)
3. Real assets: real estate, autos, etc.
4. Portfolio decisions: individuals must decide how to allocate wealth between alternative assets
a. Wealth held in assets that have a rate of return → ↑ future wealth (and consumption)
b. Wealth held as money → ↓ transaction costs from making purchases (i.e., cost of conversion of wealth into money, which is the only asset accepted as a means of payment)
5. Note: Prices of stocks and bonds move inversely with the interest rate (as for any nonmonetary financial asset)
a. P_Bond ≅ Coupon ÷ r
b. P_Stock ≅ [Dividend + Capital Gain] ÷ r
B. Demand for Real Money Balances (Md/P) or Liquidity Preferences (L) ⇒ Md/P required to purchase a fixed quantity of goods and services
1. Md = nominal money balances and P = price level (held constant in IS-LM)
2. Md/P = L is independent of P (i.e., if all prices double ⇒ income and Md double to purchase the same quantity of goods ⇒ L remains constant)
3. L depends on income (Y) and the interest rate (r)
a. The amount of L depends on the opportunity cost of holding money (i.e., the interest rate paid on nonmonetary financial assets).
i. The higher is r, the more costly the holding of money (i.e., earnings foregone from holding money).
ii. Changes in r and L are inversely related (i.e., ↓ r → ↑ L)
iii.
The opportunity cost of money is the risk-free interest rate, which is approximated by the federal funds and 3-month Treasury bill short-term interest rates.
iv. Assume that the interest rate responsiveness of Md/P is −200 (i.e., f = 200 in Md/P = hY − fr) ⇒ Δ(Md/P) = −f(Δr) = −200(Δr)
v. Graph 4-1
b. The amount of transactions for goods & services depends on the level of Y
i. ↑ Y → ↑ level of transactions (note: ΔCons = MPC(ΔY))
ii. Changes in Y and L are positively related (i.e., ↑ Y → ↑ L)
iii. Income responsiveness (h) of Md/P = .5 ⇒ Δ(Md/P) = h(ΔY) = .5(ΔY)
iv. Graph 4-2
C. Money (liquidity) demand function: Md/P = hY − fr = .5Y − 200r
D. Equilibrium in the money market occurs at the point where Md/P = Ms/P
1. Real money supply = Ms/P
2. Nominal money supply = Ms (Ms is created and controlled by the Federal Reserve)
a. Fed's primary assets: US Treasury bonds and Treasury bills.

This note was uploaded on 04/10/2008 for the course ECON 302 taught by Professor Adamson during the Spring '08 term at SD State.
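The demand function in section C and the equilibrium condition in section D can be written compactly (a sketch in the notes' own symbols; h = 0.5 and f = 200 are the notes' illustrative values):

```latex
% Real money demand (liquidity preference)
\[ \frac{M^d}{P} = L(Y, r) = hY - fr = 0.5\,Y - 200\,r \]

% Money-market equilibrium: real supply equals real demand,
% which pins down the interest rate for a given income level
\[ \frac{M^s}{P} = \frac{M^d}{P}
   \quad\Longrightarrow\quad
   r = \frac{hY - M^s/P}{f} = \frac{0.5\,Y - M^s/P}{200} \]
```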
https://www.coursehero.com/file/104459/Chapter4-302/