Hello all, this is my first post; I have been lurking about since I found this site. So here is my dilemma: I have a problem for school that will generate 3 random numbers between 0 and 9 (GOT THAT, I THINK). It allows the user to pick 3 numbers and compare them to the ones that have been generated. The awards are as follows: any one matching = $10, two matching = $100, three matching not in order = $1,000, and three matching in order = $1,000,000. I'm having an issue with my if statements, making sure they cover all the bases. I would love some feedback on what I can do to improve the code I have, since I am a newbie. Should I make a struct for the numbers, like for guess and generated? Something along those lines?
Here is what I have:
Code:
// lottery.cpp : main project file.
#include "stdafx.h"
#include <iostream>
#include <ctime>
using namespace std;
int main ()
{
// Declare variables
int firstGuess,
secondGuess,
thirdGuess,
firstResult,
secondResult,
thirdResult;
cout << "This is a lottery game that allows the user to pick three numbers." << endl;
cout << "The game will randomly pick three numbers as well.." << endl;
cout << "The winnings will be based on the matching of the correct numbers picked." << endl;
cout << "Any one matching = $10, Any two matching = $100," << endl;
cout << "Any three matching, not in order = $1,000." << endl;
cout << "Three matching in exact order = $1,000,000, No matches = $0." << endl;
cout << "Now the fun begins.. GOOD LUCK !!!" << endl;
cout << "Please pick your first number: ";
cin >> firstGuess;
cout << "Please pick your second number: ";
cin >> secondGuess;
cout << "Please pick your third number: ";
cin >> thirdGuess;
cout << "Your picked numbers are: " << firstGuess << "--" << secondGuess <<"--" << thirdGuess << endl;
cout << endl;
cout << endl;
cout << "And the winning numbers are.... " <<endl;
// Now draw the winning three numbers
const int DIVISOR = 10;
const int NUM = 3;
srand ((unsigned) time (NULL));
firstResult = rand() % DIVISOR;
secondResult = rand() % DIVISOR;
thirdResult = rand() % DIVISOR;
cout << firstResult <<"--" << secondResult <<"--" << thirdResult << endl;
cout << endl;
if (firstGuess == firstResult && secondGuess == secondResult && thirdGuess == thirdResult)
cout << "YOU WIN THE GRAND PRIZE!!!! $1,000,000" << endl;
return 0;
}
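One way to structure the prize logic (a sketch, not the poster's code; the prize helper and its use of arrays are my own): handle the cases from most to least specific, checking the exact-order match first and then counting value matches with order ignored. Sorting copies of both triples makes duplicate digits come out right.

```cpp
#include <algorithm>

// Hypothetical helper: returns the winnings for one round, given the
// player's guesses and the drawn results (3 numbers each).
long prize(const int guess[3], const int result[3])
{
    // Most specific case first: all three match in order.
    if (guess[0] == result[0] && guess[1] == result[1] && guess[2] == result[2])
        return 1000000;

    // Otherwise count value matches ignoring order. Sorting copies of
    // both arrays lets a simple two-pointer walk pair up equal values,
    // which also handles repeated digits correctly.
    int g[3] = { guess[0], guess[1], guess[2] };
    int r[3] = { result[0], result[1], result[2] };
    std::sort(g, g + 3);
    std::sort(r, r + 3);

    int matches = 0;
    for (int i = 0, j = 0; i < 3 && j < 3; ) {
        if      (g[i] == r[j]) { ++matches; ++i; ++j; }
        else if (g[i] <  r[j]) { ++i; }
        else                   { ++j; }
    }

    if (matches == 3) return 1000;   // all three, wrong order
    if (matches == 2) return 100;
    if (matches == 1) return 10;
    return 0;
}
```

With a helper like this, the tail of main() collapses to a single `cout << "You win $" << prize(guesses, results) << endl;` using `int guesses[3]` / `results[3]` in place of six separate variables. No struct is strictly needed at this size; grouping the numbers into arrays already does most of the work.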
Hello everyone :)
My problem is that my input will not be transferred when I build a new object.
To understand my problem, I'll first give you the output:
Welcome Hero!
Show me your Attributes!
Enter your Name:
KIKI
Enter your Strength:
50
Enter your Health:
60
Enter your Intelligence:
40
Enter your Agility:
30
I am KIKI.
My Strength is 50 my Health is 60
My Intelligence is 40 and my Agility is 30
Please choose your Class you want to be trained in.
Press 1 for WARRIOR!
Press 2 for MAGE!
Press 3 for THIEF!
Press 4 for PRIEST!
1
After years of training im a Warrior now.
My Strength and Health are increased but my Intelligence and Agility are decreased.
I am now null the Destroyer
My Strength is 20 my Health is 50
My Intelligence is -30 and my Agility is -20
I can jump and run.
My Sword Strike penetrates deep into Enemys Flesh!!
The part I made bold is where the problem lies.
As you can see, my input for strength was 50. In the bold part it's supposed to be 70 (an increase of 20), but it only shows me the increment value.
How can I keep the value from the first input, so that when I choose Warrior the input will be increased by 20?
My code for the superclass is:
import java.util.*;

public class Human {
    Scanner CharSet = new Scanner(System.in);
    String name;
    int strength, health, intelligence, agility;

    public void MyChar() {
        System.out.println("Enter your Name: ");
        name = CharSet.next();
        System.out.println("Enter your Strength: ");
        strength = CharSet.nextInt();
        System.out.println("Enter your Health: ");
        health = CharSet.nextInt();
        System.out.println("Enter your Intelligence: ");
        intelligence = CharSet.nextInt();
        System.out.println("Enter your Agility: ");
        agility = CharSet.nextInt();
    }

    public void Reveal() {
        System.out.println("I am " + name + ". ");
        System.out.println("My Strength is " + strength + " my Health is " + health);
        System.out.println("My Intelligence is " + intelligence + " and my Agility is " + agility);
    }

    public void BasicSkill() {
        System.out.println("I can jump and run.");
    }
}
My code for the subclass is:
public class Warrior extends Human {
    public void Reveal() {
        System.out.println("After years of training im a Warrior now.");
        System.out.println("My Strength and Health are increased but my Intelligence and Agility are decreased.");
        System.out.println("I am now " + name + " the Destroyer ");
        System.out.println("My Strength is " + (strength + 20) + " my Health is " + (health + 50));
        System.out.println("My Intelligence is " + (intelligence - 30) + " and my Agility is " + (agility - 20));
    }

    public void WarriorSkill() {
        System.out.println("My Sword Strike penetrates deep into Enemys Flesh!!");
    }
}
And my code for the main class:
import java.util.*;

public class TestHuman {
    public static void main(String[] arguements) {
        Scanner NewClass = new Scanner(System.in);
        int choose;
        System.out.println("Welcome Hero!");
        System.out.println("Show me your Attributes!");
        Human Hobject = new Human();
        Hobject.MyChar();
        Hobject.Reveal();
        System.out.println("");
        System.out.println("Please choose your Class you want to be trained in.");
        System.out.println("Press 1 for WARRIOR!");
        System.out.println("Press 2 for MAGE!");
        System.out.println("Press 3 for THIEF!");
        System.out.println("Press 4 for PRIEST!");
        choose = NewClass.nextInt();
        if (choose == 1) {
            Warrior Wobject = new Warrior();
            //Wobject.MyChar();
            Wobject.Reveal();
            Wobject.BasicSkill();
            Wobject.WarriorSkill();
        } else {
            System.out.println("When you are not even be able to choose from 1 to 4 you cant become a HERO!");
        }
    }
}
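A possible answer, sketched below (names like setAttributes are mine, and console input is replaced by plain assignments so the snippet stays self-contained): gather the attributes on the same object you later reveal, and write the adjusted values back into the inherited fields rather than only printing strength + 20.

```java
// Minimal sketch: the Warrior both *owns* the entered attributes (it
// inherits the fields) and *persists* the class bonuses by assigning
// back to those fields, so later methods see the adjusted values too.
class Human {
    String name;
    int strength, health, intelligence, agility;

    // Stands in for the Scanner-based MyChar(); same idea, no console I/O.
    void setAttributes(String name, int strength, int health,
                       int intelligence, int agility) {
        this.name = name;
        this.strength = strength;
        this.health = health;
        this.intelligence = intelligence;
        this.agility = agility;
    }
}

class Warrior extends Human {
    void reveal() {
        strength += 20;       // 50 -> 70, kept in the field
        health += 50;
        intelligence -= 30;
        agility -= 20;
        System.out.println("I am now " + name + " the Destroyer");
        System.out.println("My Strength is " + strength + " my Health is " + health);
    }
}

class Demo {
    public static void main(String[] args) {
        Warrior w = new Warrior();               // ONE object from the start
        w.setAttributes("KIKI", 50, 60, 40, 30); // input lands on the Warrior
        w.reveal();
    }
}
```

The bug in the posted main is that MyChar() ran on a Human while Reveal() ran on a brand-new Warrior whose fields were still at their defaults, hence the null name and the bare increment values. Constructing the Warrior first and reading input on it (un-commenting Wobject.MyChar()) fixes it.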
Thanks for helping | https://www.daniweb.com/programming/software-development/threads/267661/how-to-keep-the-input | CC-MAIN-2018-13 | refinedweb | 513 | 61.53 |
NAME
Create a VM object referring to a specific contiguous range of physical memory.
SYNOPSIS
#include <zircon/syscalls.h>

zx_status_t zx_vmo_create_physical(zx_handle_t resource,
                                   zx_paddr_t paddr,
                                   size_t size,
                                   zx_handle_t* out);
DESCRIPTION
zx_vmo_create_physical() creates a new virtual memory object (VMO), which represents the
size bytes of physical memory beginning at physical address paddr.
ZX_RIGHT_EXECUTE - May be mapped with execute permissions.
ZX_RIGHT_MAP - May be mapped.
ZX_RIGHT_GET_PROPERTY - May get its properties using
zx_object_get_property().
ZX_RIGHT_SET_PROPERTY - May set its properties using
zx_object_set_property().
The ZX_VMO_ZERO_CHILDREN signal is active on a newly created VMO. It becomes inactive whenever a child of the VMO is created and becomes active again when all children have been destroyed and no mappings of those children into address spaces exist.
NOTES
The VMOs created by this syscall are not usable with
zx_vmo_read() and
zx_vmo_write().
RIGHTS
resource must have resource kind ZX_RSRC_KIND_MMIO.
RETURN VALUE
zx_vmo_create_physical() returns ZX_OK on success. In the event
of failure, a negative error value is returned.
ERRORS
ZX_ERR_WRONG_TYPE resource is not a handle to a Resource object.
ZX_ERR_ACCESS_DENIED resource does not grant access to the requested range of memory.
ZX_ERR_INVALID_ARGS out is an invalid pointer or NULL, or paddr or size are not page-aligned.
ZX_ERR_NO_MEMORY Failure due to lack of memory. There is no good way for userspace to handle this (unlikely) error. In a future build this error will no longer occur. | https://fuchsia.dev/fuchsia-src/reference/syscalls/vmo_create_physical | CC-MAIN-2020-29 | refinedweb | 224 | 51.34 |
A group blog from members of the VB team

In the true spirit of VB, you'll see a ton of other improvements that will make you more productive every day.
Async

As the world moves to mobile phones and tablets, the demand for responsiveness in today's applications is higher than ever. Things like database queries, network requests, and disk access all have potential to block the UI and leave users frustrated. While user expectations continue to climb, the tool/platform support for making asynchronous programming easy hasn't kept pace, until now. With the new Async/Await keywords, VB11 makes asynchronous programming really simple:
Public Async Function GetStorageFile() As Task(Of Windows.Storage.StorageFile)
Dim packageFolder = Windows.ApplicationModel.Package.Current.InstalledLocation
TextBlock1.Text = "Retrieving File..."
Dim packagedFile = Await packageFolder.GetFileAsync("FileLocatedInPackage")
...
Return packagedFile
End Function
The Await keyword kicks off an asynchronous request without blocking the UI. The function returns a Task(Of T) at the point of the Await expression, but this is just a placeholder for the return value that will come from GetFileAsync. Once that work completes, the method resumes and the variable packagedFile is assigned to.
Be sure to check out the Asynchronous Programming Developer Center for articles, videos, and samples on how to use Async. This blog post has a good conceptual explanation of async, and Lucian’s blog has a ton of great resources for learning the feature.
VB11 also includes full async debugging support. F10-Step-Over (or Shift+F8 on VB Profile) now does what you’d expect. If you’re still on the function-declaration-line then it steps out to the caller. But often (e.g. if you’ve gone past an await) then the concept of “caller” doesn’t even exist. So, for consistency, if you’re anywhere outside the declaration line then Shift+F11 (Ctrl+Shift+F8 on VB Profile) will step out to someone who’s awaiting you.
The other thing we’ve added is async unit-testing support in MSTest. xUnit now supports this as well.
<TestMethod>
Async Function Test1() As Task
Dim x = Await Engine.GetSevenAsync()
Assert.AreEqual(x, 6)
<Xunit.Fact>
Async Function Test2() As Task
Dim x = Await Engine.GetSevenAsync()
Xunit.Assert.Equal(x, 6)
Iterators

Iterators are a new feature in VB11 that make it easier to walk through collections such as lists and arrays. Each element is returned to the calling method immediately, before the next element in the sequence is accessed.
In addition to working with collections, you can use iterators to write your own custom LINQ query operators. For instance, the following example prints out only the even numbers in the array:
Module Module1
Sub Main()
Dim query = From n In {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Where n Mod 2 = 0
Select n
For Each item In query
Console.WriteLine(item)
Next
Console.ReadLine()
End Sub
<System.Runtime.CompilerServices.Extension>
Iterator Function Where(source As Integer(),
predicate As Func(Of Integer, Boolean)
) As IEnumerable(Of Integer)
For Each num In source
If predicate(num) Then
Console.WriteLine("Yielding " & num)
Yield num
End If
Next
End Function
End Module
Note the new Iterator and Yield keywords in the Where function. (The LINQ query binds to this Where extension method since it’s a better match than the Where(Of T) operator defined in the standard query operators).
VB also allows iterator lambdas! In this example, we use an iterator in an expression context and combine it in a powerful way with XML Literals (note that we’re effectively embedding statements inside embedded expressions now):
Dim images =
<html>
<body>
<%=
Iterator Function()
For Each fn In IO.Directory.EnumerateFiles("c:\\", "*.jpg")
Yield <img src=<%= fn %>></img>
Next
End Function.Invoke() %>
</body>
</html>
Namespace Global

VB has always had the Namespace and Global keywords, but now you can use them together!
Namespace Global
Namespace Global.<NamespaceName>
This gives you a lot more flexibility around which namespace your code ends up in, and is particularly useful for code-generation scenarios. For a full description of the feature, check out Lucian’s excellent post here.
Optional Parameters in Overloaded Methods

Previously, overloads of a method were not permitted if the only difference between them was optional parameters, so the following code would be invalid:
Sub f(x As Integer)
Sub f(x As Integer, Optional y As Integer = 0)
In VB11 this code is now valid, which gives you more flexibility and improves the ability to version methods (though there’s still more we need to do in this area).
Limitless (Command-line) Errors!

This is actually a good thing; let me explain. For performance reasons, the Visual Basic IDE maxes out at 101 errors (with error #102 being "Maximum number of errors exceeded.") This can make it difficult to estimate the amount of work remaining in certain situations, particularly in upgrade scenarios. We have removed this limit from the command-line compiler in this release, though it's still there in the IDE. What this means is: if you want to know exactly how many errors there are for a project, just invoke the compiler through msbuild.exe or vbc.exe and you'll get your answer.
Caller Info Attributes

The compiler now recognizes three special attributes: <CallerMemberName>, <CallerLineNumber>, and <CallerFilePath>. This is great for logging scenarios, and allows you to have the name, line number, and/or file path of the invoking method passed into the logging function as optional parameters.
Another great use case for this is when implementing INotifyPropertyChanged:
Class C
Implements INotifyPropertyChanged
Dim backing As New Dictionary(Of String, Object)
Property p As String
Get
Return GetProp(Of String)()
End Get
Set(ByVal value As String)
SetProp(value)
End Set
End Property
Public Function GetProp(Of T)(<CallerMemberName()> Optional prop As String = Nothing) As T
Debug.Assert(prop IsNot Nothing)
Try
Return CType(backing(prop), T)
Catch ex As KeyNotFoundException
Return CType(Nothing, T)
End Try
End Function
Public Sub SetProp(Of T As IComparable(Of T))(value As T,
<CallerMemberName()> Optional prop As String = Nothing)
Dim oldvalue = CType(backing(prop), T)
If value.CompareTo(oldvalue) = 0 Then Return
backing(prop) = value
RaiseEvent PropertyChanged(Me, New PropertyChangedEventArgs(prop))
End Sub
Public Event PropertyChanged(sender As Object, e As PropertyChangedEventArgs) Implements INotifyPropertyChanged.PropertyChanged
End Class
Simplified Code Spit (a.k.a. No More ByVal!)

The IDE will no longer insert "ByVal" in method signatures unless you explicitly type it in. This reduces a lot of the visual noise in method declarations and makes them more readable. Also, the IDE will no longer insert the fully-qualified name for a type (such as "System.Object" or "System.EventArgs") when the applicable imports are already in scope. (Because in this case "System" is already a project-level import).
VB10:
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
VB11:
Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
This also works automatically for all interface methods generated when you press enter after an Implements clause.
View Call Hierarchy

In VB11, if you right-click a method name, you'll see a new option for "View Call Hierarchy" that brings up a window that looks like this:
It shows you all calls to/from that method, and also if it gets overridden anywhere. You can then walk the tree and understand how these methods are used throughout your codebase. For a more detailed overview, check out this post.
Getting Started

Hopefully that gives you a taste of what's new in VB11 (and we haven't even touched on all the performance improvements!). There are far more enhancements than we can cover here, so watch for a future post that goes into more detail. For next steps:
1. Download the Bits!

Download Visual Studio 11 Beta (you can also get Team Foundation Server 11 Beta, and .NET Framework 4.5 Beta from here). Visual Studio 11 Beta can be installed on Windows 7, or you can install it on top of the Windows 8 Consumer Preview.
2. Learn More!

In addition to Jason's announcement, be sure to check out the new Windows 8 app developer blog for a list of great improvements across Visual Studio. For samples, check out the Windows 8 Sample Gallery. To explore what's new in Windows 8, check out the Building Windows 8 blog post announcing the Windows 8 Consumer Preview.
3. Send us Feedback!

The forums are available for questions, both in Visual Studio and Windows 8. If you find a bug please let us know through the Microsoft Connect site. For feature suggestions check out the UserVoice site (which also allows you to vote on other users' feature ideas).
We hope you enjoy the product and look forward to seeing tons of great VB apps!
Jon Aneja, Program Manager, VB/C# Compiler Team
Lucian Wischik, Program Manager, VB Spec Lead
I'm not sure if the await example makes sense: TextBlock1.Text is set *after* the await line, which means it is part of the continuation. Thus, "Retrieving File..." is printed only *after* the file has been downloaded completely...
Why the new Iterator keyword? is it because you found a lot of/important customers with problems because they used "yield" as a variable name in their codebases? Or is it because of something else?
Are the arguments decorated with the new Caller Info attributes visible to developers? or only inside the scope of the method? It would be nice if it would be the second.
PS: Sorry if this appears as a double comment, I don't know if my previous one has been submitted.
Good going. Delegates and background threads finally simplified.
Add in a WPF databinding check method like BOOL Me.Property.EnsuredIsBoundToXaml() and we're set
@Heinzi - good catch! We've updated the code sample
Héctor,
As for the CallerInfo attributes they don't hide the parameters from consumers of the method, no. These parameters look just like normal optional parameters. This is important as an example for a method which has multiple overloads which take CallerInfo parameters. One overload should explicitly pass its arguments to other overloads to preserve the line CallerInfo of an external caller. Otherwise the innermost overload would always get the CallerInfo of whatever other Overload called it. We're looking at ways to note these parameters as more special though such as displaying "(caller name)" or the current line number as the default value instead of the one declared in source to make it clearer to developers their true nature.
Regards,
Anthony D. Green
Program Manager
Visual Basic & C# Languages Team
I would like to see nested subprocedures. A sub or function that is declared completely within the context of another sub and could only be called from within that parent sub. The parent sub's variables would be available to the child. There are so many times when I want to call a new sub that will only be called from one sub and I have to pass lots of parms to the new sub. Nested subs would allow me to write the new sub inside the parent without having to pass parms. Delphi has had this for years!
I would like to get notes about the classes in VB.net. I am still new in programming. I can be contacted at tshepomiya@gmail.com
I would like to see nested subprocedures ! (Me to!)
@JC: Delphi is a real-oop language....
Class Test
Public Sub New()
End Sub
Public Sub New(Param as string)
.......
End class
Class NotOOP
inherits Test
So: Bla as new NotOOP("Delphi is for nerds, VB is just what it is..... Basic stuff")
I am having trouble downloading Visual Basic 11 Beta.
Although declarative style is often nice, it may also make the language a bit more verbose, which is a problem for some people out there.
Also, on the subject of properties, when you make an interface with a readonly property, C# allows its implementation to have a private set, however in VB you are forced to use ReadOnly, which may be correct at some levels, but on others it makes you add some more code than desired. I hope this won't cause similar undesired behaviour.
Is real transparency and opacity on controls still not possible?
How about making Option Strict On as default? I'm sure that would please many dedicated VB programmers as well as myself. And as Joaquim mentioned, Opacity property on controls would be very nice.
Opacity on controls is a Windows Forms issue, and it's unlikely we'll see any new update for it ever... a shame, because I still prefer Windows Forms over WPF when making a desktop app, but if it was already placed aside by Microsoft when WPF made its debut, now with Metro we can completely forget about it.
Sorry, but what is the limitation of the language?! It's our imagination ;) (that's what everyone learns).
So why not just update the Transparency property (make it faster)? That way we could get Opacity too. It would still be a fake transparency/opacity, but better to have it fast than nothing ;)
@JC "I would like to see nested subprocedures. .."
You can already do that; I do it all the time:
public sub foo()
dim innerSub = sub()
doSomeStuff()
end sub
innerSub()
end sub | http://blogs.msdn.com/b/vbteam/archive/2012/02/28/visual-basic-11-beta-available-for-download.aspx | CC-MAIN-2015-40 | refinedweb | 2,227 | 55.64 |
Subscribe for signals on an object.
#include <zircon/syscalls.h>

zx_status_t zx_object_wait_async(zx_handle_t handle,
                                 zx_handle_t port,
                                 uint64_t key,
                                 zx_signals_t signals,
                                 uint32_t options);
zx_object_wait_async() is a non-blocking syscall which causes packets to be enqueued on port when the specified condition is met. Use
zx_port_wait() to retrieve the packets.
handle points to the object that is to be watched for changes and must be a waitable object.
The options argument can be 0 or it can be ZX_WAIT_ASYNC_TIMESTAMP which causes the system to capture a timestamp when the wait triggered.
The signals argument indicates which signals on the object specified by handle will cause a packet to be enqueued, and if any of those signals are asserted when
zx_object_wait_async() is called, or become asserted afterwards, a packet will be enqueued on port containing all of the currently-asserted signals (not just the ones listed in the signals argument). Once a packet has been enqueued the asynchronous waiting ends. No further packets will be enqueued. Note that signals are OR'd into the state maintained by the port thus you may see any combination of requested signals when
zx_port_wait() returns.
zx_port_cancel() will terminate the operation and if a packet was in the queue on behalf of the operation, that packet will be removed from the queue.
If handle is closed, the operation will also be terminated, but packets already in the queue are not affected.
Packets generated via this syscall will have type set to ZX_PKT_TYPE_SIGNAL_ONE and the union is of type
zx_packet_signal_t:
typedef struct zx_packet_signal {
    zx_signals_t trigger;
    zx_signals_t observed;
    uint64_t count;
    zx_time_t timestamp;  // depends on ZX_WAIT_ASYNC_TIMESTAMP
    uint64_t reserved1;
} zx_packet_signal_t;
trigger is the signals used in the call to
zx_object_wait_async(), observed is the signals actually observed, count is a per object defined count of pending operations and timestamp is clock-monotonic time when the object state transitioned to meet the trigger condition. If options does not include ZX_WAIT_ASYNC_TIMESTAMP the timestamp is reported as 0.
Use the
zx_port_packet_t's key member to track what object this packet corresponds to and therefore match count with the operation.
handle must have ZX_RIGHT_WAIT.
port must be of type ZX_OBJ_TYPE_PORT and have ZX_RIGHT_WRITE.
zx_object_wait_async() returns ZX_OK if the subscription succeeded.
ZX_ERR_INVALID_ARGS options is not 0 or ZX_WAIT_ASYNC_TIMESTAMP.
ZX_ERR_BAD_HANDLE handle is not a valid handle or port is not a valid handle.
ZX_ERR_WRONG_TYPE port is not a Port handle.
ZX_ERR_ACCESS_DENIED handle does not have ZX_RIGHT_WAIT or port does not have ZX_RIGHT_WRITE.
ZX_ERR_NOT_SUPPORTED handle is a handle that cannot be waited on.
ZX_ERR_NO_MEMORY Failure due to lack of memory. There is no good way for userspace to handle this (unlikely) error. In a future build this error will no longer occur.
See signals for more information about signals and their terminology.
zx_object_wait_many()
zx_object_wait_one()
zx_port_cancel()
zx_port_queue()
zx_port_wait() | https://fuchsia.googlesource.com/fuchsia/+/master/docs/reference/syscalls/object_wait_async.md | CC-MAIN-2020-05 | refinedweb | 457 | 54.63 |
import java.util.*;
public class apples {
public static void main(String[] args) {
final Formatter x;
try{
x = new Formatter("fred.txt");
System.out.println("you created a file");
}
catch(Exception e){
System.out.println("you got an error");
}
}
}
Select allOpen in new window
© 1996-2022 Experts Exchange, LLC. All rights reserved. Covered by US Patent
That's waht API writes:
fileName - The name of the file to use as the destination of this formatter. If the file exists then it will be truncated to zero size; otherwise, a new file will be created. The output will be written to the file and is buffered. | https://www.experts-exchange.com/questions/27185747/How-do-you-use-the-Formatter-class-to-create-a-text-file.html | CC-MAIN-2022-40 | refinedweb | 104 | 58.28 |
Clearly, the stars are aligned for me to build a Membase cluster exerciser, in nodejs, and blog about it along the way. I certainly hope to learn a lot about nodejs by doing this, and I hope to impart some practical knowledge to you about Membase.
Membase allows users to connect using either of the two memcached protocols: ascii or binary.
TAP streams, however, are memcached binary protocol only. Step one, then, is to ensure that nodejs can support raw binary TCP communication, which it certainly can through its binary buffer encodings. However, nodejs seems to be missing the classic ntohl and htonl functions out of the box that are needed for any binary protocol manipulations.
As a gentle start, then, here's a first contribution to the nodejs world, which is a nodejs implementation of the ntohl/htonl.
exports.htons = function(b, i, v) {
b[i] = (0xff & (v >> 8));
b[i+1] = (0xff & (v));
}
exports.ntohs = function(b, i) {
return ((0xff & b[i + 0]) << 8) |
((0xff & b[i + 1]));
}
exports.ntohsStr = function(s, i) {
return ((0xff & s.charCodeAt(i + 0)) << 8) |
((0xff & s.charCodeAt(i + 1)));
}
exports.htonl = function(b, i, v) {
b[i+0] = (0xff & (v >> 24));
b[i+1] = (0xff & (v >> 16));
b[i+2] = (0xff & (v >> 8));
b[i+3] = (0xff & (v));
}
exports.ntohl = function(b, i) {
return ((0xff & b[i + 0]) << 24) |
((0xff & b[i + 1]) << 16) |
((0xff & b[i + 2]) << 8) |
((0xff & b[i + 3]));
}
exports.ntohlStr = function(s, i) {
return ((0xff & s.charCodeAt(i + 0)) << 24) |
((0xff & s.charCodeAt(i + 1)) << 16) |
((0xff & s.charCodeAt(i + 2)) << 8) |
((0xff & s.charCodeAt(i + 3)));
}
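Not part of the original module, but here is a quick sanity check of the 32-bit pair (the helpers are reproduced as local functions so the snippet runs standalone; in the real module you'd require() the file above): htonl should lay the bytes out in big-endian (network) order, and ntohl should round-trip the value.

```javascript
// Local copies of the helpers above, so this snippet is self-contained.
function htonl(b, i, v) {
  b[i + 0] = 0xff & (v >> 24);
  b[i + 1] = 0xff & (v >> 16);
  b[i + 2] = 0xff & (v >> 8);
  b[i + 3] = 0xff & (v);
}

function ntohl(b, i) {
  return ((0xff & b[i + 0]) << 24) |
         ((0xff & b[i + 1]) << 16) |
         ((0xff & b[i + 2]) << 8) |
         ((0xff & b[i + 3]));
}

var buf = [0, 0, 0, 0];
htonl(buf, 0, 0x01020304);
console.log(buf.join(','));   // 1,2,3,4 -- most significant byte first
console.log(ntohl(buf, 0));   // 16909060, i.e. 0x01020304 round-tripped
```

One caveat worth knowing: JavaScript bitwise operators work on signed 32-bit integers, so ntohl of a value at or above 2^31 comes back negative; apply `>>> 0` to the result if you need it as an unsigned number.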
This JavaScript stuff might actually go places. Above, a "b" parameter may be either an Array of octets or a nodejs Buffer. An "s" parameter is a JavaScript string. "i" is a zero-based index into the Array/Buffer/String.
More's coming soon, where I'll next show how to create a connection to Membase in nodejs and encode and decode binary protocol messages to Membase. | http://blog.couchbase.com/starting-membase-nodejs | CC-MAIN-2016-07 | refinedweb | 337 | 67.15 |
Feather
We often read articles on the topic Python vs. R, which language should I pick? My opinion is that each language, and its corresponding ecosystem, has its pros and cons and can be used efficiently to solve different problems.
Wes McKinney and Hadley Wickham seem to agree on this point and have recently developed in strong collaboration the Feather packages (one in Python and one in R at this time, but it could / will be extended to other languages).
It is designed to make reading and writing data frames efficient, and to make sharing data across data analysis languages easy.
Stop talking, it's time to experiment. Here is a “hello world” of writing a data frame in R and reading it back in Python (pandas).
library(feather)
data("mtcars")
path <- "/tmp/mtcars.feather"
write_feather(mtcars, path)
Now we can read it in Python and start playing with the mtcars data ;-)
import feather

path = '/tmp/mtcars.feather'
df = feather.read_dataframe(path)
df.head(3)
##     mpg  cyl   disp     hp  drat     wt   qsec   vs   am  gear  carb
## 0  21.0  6.0  160.0  110.0  3.90  2.620  16.46  0.0  1.0   4.0   4.0
## 1  21.0  6.0  160.0  110.0  3.90  2.875  17.02  0.0  1.0   4.0   4.0
## 2  22.8  4.0  108.0   93.0  3.85  2.320  18.61  1.0  1.0   4.0   1.0
A last word: Feather uses the Apache Arrow columnar memory specification, but at this time it should not be used for long-term storage since the format is likely to change.
Note to users: Feather should be treated as alpha software. In particular, the file format is likely to evolve over the coming year. Do not use Feather for long-term data storage. | https://teletype.in/@romain/HyB358mh7 | CC-MAIN-2021-39 | refinedweb | 306 | 72.97 |
Merb on AIR - Drag and Drop Multiple File Upload
Merb was originally created by Ezra Zygmuntowicz to avoid some Rails upload issues.
This is one of the things that Merb was written for. Rails doesn‘t allow multiple concurrent file uploads at once without blocking an entire rails backend for each file upload. Merb allows multiple file uploads at once.
I've built 'multiple' file uploaders for Rails sites before, but they always involved some sleight of hand: the files appeared to be uploading all at once, but they were actually queued up by Flex and then handled one by one by the app (which also had the unhappy side effect of blocking any other requests to that process). I've been wanting to try out Adobe AIR's file system drag and drop for a while, so this is a two-fer example. You'll need the beta version of Flex Builder 3, or the Flex 3 SDK beta if you don't mind getting down with the command line.
In a hurry? Here’s one I made earlier (flex source in ‘).
If you haven't already, install Merb
Then create a new Merb app
and dive on in
We’ll need two folders not in a Merb skeleton, one for Flex, and one for our uploads
Create a local database called ‘merb_air_upload’ and edit dist/conf/merb_init.rb so that the database definition matches your setup
# set your db info here
ActiveRecord::Base.establish_connection(
  :adapter  => 'mysql',
  :username => 'root',
  :password => '',
  :database => 'merb_air_upload'
)
Our model will be called ‘UserFile’ as ‘Upload’ and ‘File’ are reserved words, create a migration
and edit it (dist/schema/migrations/002_create_user_files.rb) to look like so
class CreateUserFiles < ActiveRecord::Migration
  def self.up
    create_table :user_files do |t|
      t.column :filename, :string, :null => false
    end
  end

  def self.down
    drop_table :user_files
  end
end
rake your db
Then create a UserFile model (dist/app/models/user_file.rb)
class UserFile < ActiveRecord::Base
end
and an upload controller (dist/controllers/upload.rb)
class Upload < Merb::Controller

  def upload
    # for testing check jer terminal
    puts params.inspect
    # new user file object
    @upload = UserFile.new
    @upload.filename = params[:Filename]
    # save
    if @upload.save
      # create directories
      dist_root = Merb::Server.config[:dist_root]
      FileUtils.mkdir dist_root + "/public/uploads/"
      # move
      destination = dist_root + "/public/uploads/ /"
      FileUtils.mv params[:Filedata][:tempfile].path, destination
    else
      false
    end
    # render_no_layout
  end

end
That's it for the Merb side of things; on to our AIR app. Fire up Flex Builder and create a new AIR project.
I like to keep my flex files in my Rails/Merb app directory
The AIR app is three files; the main MXML file (dist/app/fx/merb_air_upload.mxml), a code behind class (dist/app/fx/com/vixiom/merb_air_upload/App.as), and an upload progress component that gets repeated for each file (dist/app/fx/com/vixiom/merb_air_upload/UploadProgressComponent.mxml). Here’s the main MXML file:
It's hooked up to its code-behind 'App.as' class by the xmlns tag. App.as extends WindowedApplication, which is the base of all AIR apps:
package com.vixiom.merb_air_upload {

    import com.vixiom.merb_air_upload.UploadProgressComponent;
    import mx.core.WindowedApplication;
    import mx.containers.VBox;
    import mx.controls.Button;
    import mx.events.FlexEvent;
    import flash.events.*;
    import flash.desktop.*;
    import flash.filesystem.File;
    import flash.net.*;

    public class App extends WindowedApplication {

        private var filesToUpload:Array;
        private var UploadProgressComponents:Array;
        public var files_vb:VBox;
        public var upload_btn:Button;
        private var uploadURL:URLRequest;

        /*
         * Constructor
         */
        public function App() {
            addEventListener( FlexEvent.CREATION_COMPLETE, creationCompleteHandler );
        }

        /*
         * creationComplete
         *
         * called when the AIR app has finished loading; sets up drag/drop
         * event listeners and reference objects
         */
        private function creationCompleteHandler( event:FlexEvent ):void {
            addEventListener( NativeDragEvent.NATIVE_DRAG_ENTER, onDragEnter );
            addEventListener( NativeDragEvent.NATIVE_DRAG_DROP, onDragDrop );
            upload_btn.enabled = false;
            upload_btn.addEventListener( MouseEvent.CLICK, upload );
            uploadURL = new URLRequest();
            uploadURL.url = "";
            uploadURL.method = URLRequestMethod.POST;
            filesToUpload = new Array();
            UploadProgressComponents = new Array();
        }

        /*
         * onDragEnter
         *
         * files have been dragged into the app
         */
        private function onDragEnter( event:NativeDragEvent ):void {
            DragManager.acceptDragDrop(this);
        }

        /*
         * onDragDrop
         *
         * when files are dropped…
         */
        private function onDragDrop( event:NativeDragEvent ):void {
            DragManager.dropAction = DragActions.COPY;
            var files:Array = event.transferable.dataForFormat( TransferableFormats.FILE_LIST_FORMAT ) as Array;
            for each (var f:File in files) {
                addFile( FileReference( f ) );
            }
            upload_btn.enabled = true;
        }

        /*
         * addFile
         *
         * …add them to the filesToUpload array, attach the file upload
         * listeners, and create a progress component for each file
         */
        private function addFile( f:FileReference ):void {
            filesToUpload.push( f );
            var upv:UploadProgressComponent = new UploadProgressComponent();
            UploadProgressComponents.push( upv );
            files_vb.addChild( upv );
            upv.file_lb.text = f.name;
            upv.pb.source = f;
            f.addEventListener( Event.COMPLETE, completeHandler );
            f.addEventListener( IOErrorEvent.IO_ERROR, ioErrorHandler );
        }
/* * completeHandler * * a file upload is complete, remove it from filesToUpload * and remove the upload component */ private :void { var f:FileReference = FileReference(e.target); for( var i:uint; i < filesToUpload.length; i++ ) { if( f.name == filesToUpload[i].name ) { files_vb.removeChild( UploadProgressComponents[i] ); filesToUpload.splice(i, 1); UploadProgressComponents.splice(i, 1); } } } /* * trace any errors */ private :void { trace(“ioErrorHandler: “ + event); } /* * upload! */ private :void { for each (var f:File in filesToUpload) { f.upload( uploadURL ); } } } }
The last file is the upload progress component; its progress bar listens for events from each file (upv.pb.source = f; above in the addFile method).
That’s it! Test your AIR app by dragging some files in from the file system. Once you drop them, the upload progress components show a visual representation of the files; click ‘upload files’ and the files are uploaded all at once (for real real, not for play play this time).
drag!
drop!
upload!
Awesome. I have been wanting to play with Flex and Merb and now you have given me some ground to work on. Cheers
Comment by K. Adam Christensen — June 29, 2007 @ 5:37 am
Very cool. It’s awesome to see Merb getting more use. :)
Comment by Jeremy McAnally — June 29, 2007 @ 7:12 am
Hey, u once fancy jerself w/PHP…….would love to see this on php, or on the same realm CAKEPHP? :) any chance?
Comment by tomo_atlacatl — July 9, 2007 @ 3:43 pm
Tomo I know you’re smart enough to make a PHP version, the upload script is ‘cake’ so it could be done in Cake. How about a Strata version :P
Comment by Alastair — July 9, 2007 @ 7:09 pm
[…] I also ran through the Merb/AIR tutorial here: Merb on AIR - Drag and Drop Multiple File Upload. […]
Pingback by dirtystylus » Blog Archive » Locomotive on Rails, with MAMP as a Caboose — July 10, 2007 @ 12:02 pm
Yeah, I know…. I just wanted to tempt ya back to the php side :P. Strata, heh, haven’t touch dat in a while.
Comment by tomo_atlacatl — July 10, 2007 @ 2:16 pm
[…] Check out this AIR tutorial for Drag and Drop Multiple File Upload using Merb and AIR.You need Flex builder 3 Beta for this. more @ […]
Pingback by Drag and Drop Multiple File Upload with Merb and AIR | Adobe AIR Tutorials — July 26, 2007 @ 11:29 am
Great example, much appreciated and well done.
Comment by Evan Gifford — August 9, 2007 @ 2:50 pm
thanks Evan,
Let me know when flexheads.com is launched, I use Cairngorm on some larger projects but still haven’t wrapped my brain completely around it.
Comment by Alastair — August 9, 2007 @ 4:26 pm
Update for flex3 beta 2, change the line in onDrop from:
var files:Array = event.transferable.dataForFormat( TransferableFormats.FILE_LIST_FORMAT ) as Array;
to:
var dropfiles:Array = event.clipboard.dataForFormat(flash.desktop.ClipboardFormats.FILE_LIST_FORMAT) as Array;
and add
import flash.desktop.ClipboardFormats;
Comment by paul — October 2, 2007 @ 11:43 am
And if I double checked before posting I could fix my code to match the code here.
var files:Array = event.transferable.dataForFormat( TransferableFormats.FILE_LIST_FORMAT ) as Array;
to:
var files:Array = event.clipboard.dataForFormat(flash.desktop.ClipboardFormats.FILE_LIST_FORMAT) as Array;
Comment by paul — October 2, 2007 @ 11:50 am
Hi! Looks like great work! ;-) Is it possible to combine this with a rails app? I’d like to just use merb for uploading, so I’d like to be able to open the uploader from my rails app. Can anybody give me some hints on how to do that? Another thing I’m curious about is how the merb and the air application communicate with each other - I’m relatively new to web development. Also does anybody know about how to add a button to the AIR app for adding files for uploading, using a “traditional” file dialogue? And are there any problems with deploying a rails app that also has a MERB part?
Comment by Nico — October 10, 2007 @ 1:26 am
Very nice.
Can this work with an HTTPS upload URL?
Thanks, Gerry
Comment by Gerry McLarnon — October 24, 2007 @ 2:05 am
Gerry,
I haven’t tried it with HTTPS. I know that with Flex and the Flash player there are special security requirements when using HTTPS; I’m not sure whether those also apply to AIR.
Comment by Alastair — October 24, 2007 @ 9:16 am
doh that link got cut off the HTML anchor should include the parenthesis after ‘allowDomain’
Comment by Alastair — October 24, 2007 @ 9:19 am
Ruby is the programming language under which the Ruby on Rails framework runs. Merb is not a Ruby on Rails gem but a Ruby gem. Merb is a MVC web framework similar to Rails. Please do not confuse the two, Ruby is the language, Rails is the framework.
Comment by Michael Guterl — December 15, 2007 @ 1:58 pm
Thank you fricking captain of the programming police.
Comment by Alastair — December 16, 2007 @ 10:37 am
Hi,
I’m having a little trouble when importing the ‘In a hurry’ version into Flex as an Existing Project.
I instantly receive these errors having made no changes to the code:
1119: Access of possibly undefined property transferable through a reference with static type flash.events:NativeDragEvent.
1120: Access of undefined property DragActions.
1120: Access of undefined property DragManager.
1120: Access of undefined property DragManager.
1120: Access of undefined property TransferableFormats.
I feel as though I am missing something very simple here, any help would be greatly appreciated!
D.Kumar
Comment by Dipen Kumar — January 4, 2008 @ 8:46 am
Hey,
It looks like the Flex Drag classes have changed in Flex Beta 3 (this was made with Beta 2). Merb has also changed since this post; such is the life of living on the edge!
I’ve posted a fix here
BTW you’re the first to see actionsnip.com as I just released it yesterday :)
- Alastair
Comment by Alastair — January 4, 2008 @ 10:48 am
Hi Alastair,
Thank you for the swift response, my Flex project with the drag and drop feature now works very nicely :)
However… the problem I have now is that MERB is not allowing me to perform a db:migrate. I’ve checked the methods inside the MERB app and the db:migrate method does not exist. I have tried various things to run db:migrate but have not been successful.
Again, any help would be greatly appreciated. This is the last step that is needed to complete the project for me.
Oh, also, I’m impressed with your website, looks like a useful resource for me.
D.Kumar
Comment by Dipen Kumar — January 8, 2008 @ 2:30 am
Okay, last one I promise!
Here we go…I figured out why the rake db:migrate was not working. The reason was because I had forgotten about the configuring of a database with MERB as MERB does not come with database support. Anyway, MERB connects fine with the database but the rake db:migrate does not work because it cannot find the schema.rb file even though it exists in dist/schema/schema.rb.
Thoughts, suggestions, plain old pointing out the obvious would help me so much,
D.Kumar
Comment by Dipen Kumar — January 8, 2008 @ 7:31 am
Hey, does this solution work with Files bigger than 100MB?
Thanks,
Marcel
Comment by Marcel Fahle — January 28, 2008 @ 4:37 am
Hi Marcel,
I’m not sure if this applies to AIR apps but I know Flash file upload only handles files less than 100mb.
Comment by Alastair — January 28, 2008 @ 8:50 am
could u send code for upload videos files in flex and send its data to rails ?
Comment by abhishek — January 28, 2008 @ 11:34 pm
[…] and Drop Multiple File UploadHere is the tutorials about Drag and Drop Multiple File Upload.It’s actually using Flex 3 and Adobe AIR.The server side is using the Ruby on Rails gem Merb to […]
Pingback by The list of mine top 3 Flex File Upload Component - Ntt.cc — February 2, 2008 @ 4:47 pm
[…] Merb on AIR - Drag and Drop Multiple File Upload - A slightly old (June 2007) tutorial demonstrating how to use Merb alongside an Adobe AIR powered client to handle file uploads. […]
Pingback by 21 Merb Links, Tutorials and Other Resources — February 4, 2008 @ 8:20 pm
[…] […]
Pingback by Philip Arkcoll » Flex AIR Drop Box Prototype — April 1, 2008 @ 1:49 pm
[…] And raw byte access is enabling everything from image and video encoding, to email clients to ftp clients, just weeks after AIR is launched. And heck, I nearly forgot about the online / offline […]
Pingback by Adobe AIR is… | Psyked — April 13, 2008 @ 3:02 am
hi
how can i intregate the air aplication with php to upload files to a ftp server
Comment by Nuno — April 22, 2008 @ 3:39 am
Hello,
Thank you very much for posting the code for this app. Would it be tough to modify this air-application to be a regular flex application? Thanks.
Comment by archie — April 25, 2008 @ 8:30 am
[…] Drag and Drop Flex File Upload with Ruby on Rails […]
Pingback by 9 Flex File Upload Examples Visited — May 8, 2008 @ 6:05 am
import "github.com/mna/pigeon/test/alternate_entrypoint"
Parse parses the data from b using filename as information in the error messages.
ParseFile parses the file identified by filename.
ParseReader parses the data from r using filename as information in the error messages.
Cloner is implemented by any value that has a Clone method, which returns a copy of the value. This is mainly used for types which are not passed by value (e.g. map, slice, chan) or structs that contain such types.
This is used in conjunction with the global state feature to create proper copies of the state to allow the parser to properly restore the state in the case of backtracking.
Option is a function that can set an option on the parser. It returns the previous setting as an Option.
AllowInvalidUTF8 creates an Option to allow invalid UTF-8 bytes. Every invalid UTF-8 byte is treated as a utf8.RuneError (U+FFFD) by character class matchers and is matched by the any matcher. The returned matched value, c.text and c.offset are NOT affected.
The default is false.
Debug creates an Option to set the debug flag to b. When set to true, debugging information is printed to stdout while parsing.
The default is false.
Entrypoint creates an Option to set the rule name to use as entrypoint. The rule name must have been specified in the -alternate-entrypoints if generating the parser with the -optimize-grammar flag, otherwise it may have been optimized out. Passing an empty string sets the entrypoint to the first rule in the grammar.
The default is to start parsing at the first rule in the grammar.
GlobalStore creates an Option to set a key to a certain value in the globalStore.
InitState creates an Option to set a key to a certain value in the global "state" store.
MaxExpressions creates an Option to stop parsing after the provided number of expressions have been parsed, if the value is 0 then the parser will parse for as many steps as needed (possibly an infinite number).
The default for maxExprCnt is 0.
Recover creates an Option to set the recover flag to b. When set to true, this causes the parser to recover from panics and convert it to an error. Setting it to false can be useful while debugging to access the full stack trace.
The default is true.
Statistics adds a user provided Stats struct to the parser to allow the user to process the results after the parsing has finished. Also the key for the "no match" counter is set.
Example usage:
input := "input" stats := Stats{} _, err := Parse("input-file", []byte(input), Statistics(&stats, "no match")) if err != nil { log.Panicln(err) } b, err := json.MarshalIndent(stats.ChoiceAltCnt, "", " ") if err != nil { log.Panicln(err) } fmt.Println(string(b))
type Stats struct {
    // ExprCnt counts the number of expressions processed during parsing.
    // This value is compared to the maximum number of expressions allowed
    // (set by the MaxExpressions option).
    ExprCnt uint64

    // ChoiceAltCnt is used to count, for each ordered choice expression,
    // which alternative is used how many times.
    // These numbers allow to optimize the order of the ordered choice expression
    // to increase the performance of the parser.
    //
    // The outer key of ChoiceAltCnt is composed of the name of the rule as well
    // as the line and the column of the ordered choice.
    // The inner key of ChoiceAltCnt is the number (one-based) of the matching alternative.
    // For each alternative the number of matches are counted. If an ordered choice does not
    // match, a special counter is incremented. The name of this counter is set with
    // the parser option Statistics.
    // For an alternative to be included in ChoiceAltCnt, it has to match at least once.
    ChoiceAltCnt map[string]map[string]int
}
Stats stores some statistics, gathered during parsing
Fuzzing Interface¶
The fuzzing interface is glue code living in mozilla-central in order to make it easier for developers and security researchers to test C/C++ code with either libFuzzer or afl-fuzz.
These fuzzing tools, are based on compile-time instrumentation to measure things like branch coverage and more advanced heuristics per fuzzing test. Doing so allows these tools to progress through code with little to no custom logic/knowledge implemented in the fuzzer itself. Usually, the only thing these tools need is a code “shim” that provides the entry point for the fuzzer to the code to be tested. We call this additional code a fuzzing target and the rest of this manual describes how to implement and work with these targets.
As for the tools used with these targets, we currently recommend the use of libFuzzer over afl-fuzz, as the latter is no longer maintained while libFuzzer is being actively developed. Furthermore, libFuzzer has some advanced instrumentation features (e.g. value profiling to deal with complicated comparisons in code), making it overall more effective.
What can be tested?¶
The interface can be used to test all C/C++ code that either ends up in libxul (more precisely, the gtest version of libxul) or is part of the JS engine.
Note that this is not the right testing approach for testing the full browser as a whole. It is rather meant for component-based testing (especially as some components cannot be easily separated out of the full build).
Note
Note: If you are working on the JS engine (trying to reproduce a bug or seeking to develop a new fuzzing target), then please also read the JS Engine Specifics Section at the end of this documentation, as the JS engine offers additional options for implementing and running fuzzing targets.
Reproducing bugs for existing fuzzing targets¶
If you are working on a bug that involves an existing fuzzing interface target, you have two options for reproducing the issue:
Using existing builds¶
We have several fuzzing builds in CI that you can simply download. We recommend using fuzzfetch for this purpose, as it makes downloading and unpacking these builds much easier. You can install fuzzfetch from GitHub or via pip.
Afterwards, you can run
$ python -m fuzzfetch -a --fuzzing --gtest -n firefox-fuzzing
to fetch the latest optimized build. Alternatively, we offer non-ASan debug builds which you can download using
$ python -m fuzzfetch -d --fuzzing --gtest -n firefox-fuzzing
In both commands, firefox-fuzzing indicates the name of the directory that will be created for the download.
Afterwards, you can reproduce the bug using
$ FUZZER=TargetName firefox-fuzzing/firefox test.bin
assuming that TargetName is the name of the fuzzing target specified in the bug you are working on, and test.bin is the attached testcase.
Note
Note: You should not export the FUZZER variable permanently in your shell, especially if you plan to do local builds. If the FUZZER variable is exported, it will affect the build process.
If the CI builds don’t meet your requirements and you need a local build instead, you can follow the steps below to create one:
Local build requirements and flags¶
You will need a Linux environment with a recent Clang. Using the Clang downloaded by ./mach bootstrap or a newer version is recommended.
The only build flag required to enable the fuzzing targets is --enable-fuzzing, so adding

ac_add_options --enable-fuzzing

to your .mozconfig is already sufficient for producing a fuzzing build.
However, for improved crash handling capabilities and to detect additional errors, it is strongly recommended to combine libFuzzer with AddressSanitizer by adding

ac_add_options --enable-address-sanitizer

at least for optimized builds, and for bugs that require ASan to reproduce at all (e.g. you are working on a bug where ASan reports a memory safety violation of some sort).
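For reference, a complete .mozconfig for such a build could look like the following sketch. Only the first two lines are required by this guide; the objdir name and the optimize/debug options are one common choice, not a requirement:

```
# Fuzzing interface build with AddressSanitizer (sketch)
ac_add_options --enable-fuzzing
ac_add_options --enable-address-sanitizer

# Common companion options; adjust to taste.
ac_add_options --enable-optimize
ac_add_options --disable-debug
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-asan
```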
Once your build is complete, you must additionally run
$ ./mach gtest dontruntests
to force the gtest libxul to be built.
Note
Note: If you modify any code, please ensure that you run both build commands to ensure that the gtest libxul is also rebuilt. It is a common mistake to only run ./mach build and miss the second command.
Once these steps are complete, you can reproduce the bug locally using the same steps as described above for the downloaded builds.
Developing new fuzzing targets¶
Developing a new fuzzing target using the fuzzing interface only requires a few steps.
Determine if the fuzzing interface is the right tool¶
The fuzzing interface is not suitable for every kind of testing. In particular if your testing requires the full browser to be running, then you might want to look into other testing methods.
The interface uses the ScopedXPCOM implementation to provide an environment in which XPCOM is available and initialized. You can initialize further subsystems that you might require, but you yourself are responsible for any such initialization steps.
There is (in theory) no limit as to how far you can take browser initialization. However, the more subsystems are involved, the more problems might occur due to non-determinism and loss of performance.
If you are unsure whether the fuzzing interface is the right approach for you, or you require help in evaluating what could be done for your particular task, please don’t hesitate to contact us.
Develop the fuzzing code¶
Where to put your fuzzing code¶
The code using the fuzzing interface usually lives in a separate directory called fuzztest that is on the same level as gtests. If your component has no gtests, then a subdirectory either in tests or in your main directory will work. If such a directory does not exist yet in your component, then you need to create one with a suitable moz.build. See the transport target for an example.
In order to include the new subdirectory into the build process, you will also have to modify the toplevel moz.build file accordingly. For this purpose, you should add your directory to TEST_DIRS only if FUZZING_INTERFACES is set. See again the transport target for an example.
How your code should look like¶
In order to define your fuzzing target MyTarget, you only need to implement 2 functions:
A one-time initialization function.
At startup, the fuzzing interface calls this function once, so this can be used to perform one-time operations like initializing subsystems or parsing extra fuzzing options.
This function is the equivalent of the LLVMFuzzerInitialize function and has the same signature. However, with our fuzzing interface, it won’t be resolved by its name, so it can be defined static and named whatever you prefer. Note that the function should always return 0 and can (except for the return) remain empty.
For the sake of this documentation, we assume that you have
static int FuzzingInitMyTarget(int* argc, char*** argv);
The fuzzing iteration function.
This is where the actual fuzzing happens, and this function is the equivalent of LLVMFuzzerTestOneInput. Again, the difference to the fuzzing interface is that the function won’t be resolved by its name. In addition, we offer two different possible signatures for this function, either
static int FuzzingRunMyTarget(const uint8_t* data, size_t size);
or
static int FuzzingRunMyTarget(nsCOMPtr<nsIInputStream> inputStream);
The latter is just a wrapper around the first one for implementations that usually work with streams. No matter which of the two signatures you choose to work with, the only thing you need to implement inside the function is the use of the provided data with your target implementation. This can mean to simply feed the data to your target, using the data to drive operations on the target API, or a mix of both.
While doing so, you should avoid altering global state in a permanent way, using additional sources of data/randomness or having code run beyond the lifetime of the iteration function (e.g. on another thread), for one simple reason: Coverage-guided fuzzing tools depend on the deterministic nature of the iteration function. If the same input to this function does not lead to the same execution when run twice (e.g. because the resulting state depends on multiple successive calls or because of additional external influences), then the tool will not be able to reproduce its fuzzing progress and perform badly. Dealing with this restriction can be challenging e.g. when dealing with asynchronous targets that run multi-threaded, but can usually be managed by synchronizing execution on all threads at the end of the iteration function. For implementations accumulating global state, it might be necessary to (re)initialize this global state in each iteration, rather than doing it once in the initialization function, even if this costs additional performance.
Note that unlike the vanilla libFuzzer approach, you are allowed to return 1 in this function to indicate that an input is “bad”. Doing so will cause libFuzzer to discard the input, no matter if it generated new coverage or not. This is particularly useful if you have means to internally detect and catch bad testcase behavior, such as timeouts or excessive resource usage, to avoid these tests ending up in your corpus.
Once you have implemented the two functions, the only thing remaining is to register them with the fuzzing interface. For this purpose, we offer two macros, depending on which iteration function signature you used. If you sticked to the classic signature using buffer and size, you can simply use
#include "FuzzingInterface.h" // Your includes and code MOZ_FUZZING_INTERFACE_RAW(FuzzingInitMyTarget, FuzzingRunMyTarget, MyTarget);
where MyTarget is the name of the target and will be used later to decide at runtime which target should be used.
If instead you went for the streaming interface, you need a different include, but the macro invocation is quite similar:
#include "FuzzingInterfaceStream.h" // Your includes and code MOZ_FUZZING_INTERFACE_STREAM(FuzzingInitMyTarget, FuzzingRunMyTarget, MyTarget);
For a live example, see also the implementation of the STUN fuzzing target.
Add instrumentation to the code being tested¶
libFuzzer requires that the code you are trying to test is instrumented with special compiler flags. Fortunately, adding these on a per-directory basis can be done just by including the following directive in each moz.build file that builds code under test:
# Add libFuzzer configuration directives include('/tools/fuzzing/libfuzzer-config.mozbuild')
The include already does the appropriate configuration checks to be only active in fuzzing builds, so you don’t have to guard this in any way.
Note
Note: This include modifies CFLAGS and CXXFLAGS accordingly, but this only works for source files defined in this particular directory. The flags are not propagated to subdirectories automatically, and you have to ensure that each directory that builds source files for your target has the include added to its moz.build file.
By keeping the instrumentation limited to the parts that are actually being tested using this tool, you not only increase the performance but also potentially reduce the amount of noise that libFuzzer sees.
Build your code¶
See the Build instructions above for details on how to modify your .mozconfig to create the appropriate build.
Running your code and building a corpus¶
You need to set the following environment variable to enable running the fuzzing code inside Firefox instead of the regular browser.
FUZZER=name
Where name is the name of your fuzzing module that you specified when calling the MOZ_FUZZING_INTERFACE_RAW macro. For the example above, this would be MyTarget, or StunParser for the live example.
Now when you invoke the firefox binary in your build directory with the -help=1 parameter, you should see the regular libFuzzer help. On Linux, for example:
$ FUZZER=StunParser obj-asan/dist/bin/firefox -help=1
You should see an output similar to this:
Running Fuzzer tests... Usage: To run fuzzing pass 0 or more directories. obj-asan/dist/bin/firefox [-flag1=val1 [-flag2=val2 ...] ] [dir1 [dir2 ...] ] To run individual tests without fuzzing pass 1 or more files: obj-asan/dist/bin/firefox [-flag1=val1 [-flag2=val2 ...] ] file1 [file2 ...] Flags: (strictly in form -flag=value) verbosity 1 Verbosity level. seed 0 Random seed. If 0, seed is generated. runs -1 Number of individual test runs (-1 for infinite runs). max_len 0 Maximum length of the test input. If 0, libFuzzer tries to guess a good value based on the corpus and reports it. ...
Reproducing a Crash¶
In order to reproduce a crash from a given test file, simply put the file as the only argument on the command line, e.g.
$ FUZZER=StunParser obj-asan/dist/bin/firefox test.bin
This should reproduce the given problem.
FuzzManager and libFuzzer¶
Our FuzzManager project comes with a harness for running libFuzzer with an optional connection to a FuzzManager server instance. Note that this connection is not mandatory, even without a server you can make use of the local harness.
You can find the harness here.
An example invocation for the harness to use with StunParser could look like this:
FUZZER=StunParser python /path/to/afl-libfuzzer-daemon.py --fuzzmanager \ --stats libfuzzer-stunparser.stats --libfuzzer-auto-reduce-min 500 --libfuzzer-auto-reduce 30 \ --tool libfuzzer-stunparser --libfuzzer --libfuzzer-instances 6 obj-asan/dist/bin/firefox \ -max_len=256 -use_value_profile=1 -rss_limit_mb=3000 corpus-stunparser
What this does is
run libFuzzer on the StunParser target with 6 parallel instances, using the corpus in the corpus-stunparser directory (with the specified libFuzzer options such as -max_len and -use_value_profile)
automatically reduce the corpus and restart if it grew by 30% (and has at least 500 files)
use FuzzManager (you need a local .fuzzmanagerconf and a firefox.fuzzmanagerconf binary configuration as described in the FuzzManager manual) and submit crashes as the libfuzzer-stunparser tool
write statistics to the libfuzzer-stunparser.stats file
JS Engine Specifics¶
The fuzzing interface can also be used for testing the JS engine, in fact there are two separate options to implement and run fuzzing targets:
Implementing in C++¶
Similar to the fuzzing interface in Firefox, you can implement your target entirely in C++, with very similar interfaces compared to what was described before.
There are a few minor differences though:
All of the fuzzing targets live in js/src/fuzz-tests.
All of the code is linked into a separate binary called fuzz-tests, similar to how all JSAPI tests end up in jsapi-tests. In order for this binary to be built, you must build a JS shell with --enable-fuzzing and --enable-tests. Again, this can and should be combined with AddressSanitizer for maximum effectiveness. This also means that there is no need to (re)build gtests when dealing with a JS fuzzing target and using a shell as part of a full browser build.
The harness around the JS implementation already provides you with an initialized JSContext and global object. You can access these in your target by declaring

extern JS::PersistentRootedObject gGlobal;

and

extern JSContext* gCx;

but there is no obligation for you to use these.
For a live example, see also the implementation of the StructuredCloneReader target.
Implementing in JS¶
In addition to the C++ targets, you can also implement targets in JavaScript using the JavaScript Runtime (JSRT) fuzzing approach. Using this approach is not only much simpler (since you don’t need to know anything about the JSAPI or engine internals), but it also gives you full access to everything defined in the JS shell, including handy functions such as timeout().
Of course, this approach also comes with disadvantages: Calling into JS and performing the fuzzing operations there costs performance. Also, there is more chance for causing global side-effects or non-determinism compared to a fairly isolated C++ target.
As a rule of thumb, you should implement the target in JS if
you don’t know C++ and/or how to use the JSAPI (after all, a JS fuzzing target is better than none),
your target is expected to have lots of hangs/timeouts (you can catch these internally),
or your target is not isolated enough for a C++ target and/or you need specific JS shell functions.
There is an example target in-tree that shows roughly how to implement such a fuzzing target.
To run such a target, you must run the js (shell) binary instead of the fuzz-tests binary and point the FUZZER variable to the file containing your fuzzing target, e.g.
$ FUZZER=/path/to/jsrtfuzzing-example.js obj-asan/dist/bin/js --fuzzing-safe --no-threads -- <libFuzzer options here>
More elaborate targets can be found in js/src/fuzz-tests/.
Troubleshooting¶
Fuzzing Interface: Error: No testing callback found¶
This error means that the fuzzing callback with the name you specified using the FUZZER environment variable could not be found. Reasons for this are typically either a misspelled name or that your code wasn’t built (check your moz.build file and build log).
Change the contract of GetStyle so that it returns an error when an error occurs (i.e. when it writes to stderr), and only returns the fallback style when it can't find a configuration file.
Details
Diff Detail
Event Timeline
This change works, and passes all tests; however, I have a bunch of questions, which you'll find in the diffs. I look forward to your feedback. Thanks!
One more thing I forgot to mention is that this change comes from a discussion we had on this other change: where @djasper agreed that "fallback-style should only be used when there is no .clang-format file. If we find one, and it doesn't parse correctly, we should neither use the fallback style nor scan in higher-level directories (not sure whether we currently do that).".
Hello everyone, so after a few more tests, I've uncovered a bug and perhaps a different meaning for fallback style. First, the bug: if you set fallback style to "none", clang-format will perform no replacements. This happens because getStyle first initializes its local Style variable to LLVM style, and then, because a fallback style is set, sets it to the "none" style, which ends up setting Style.DisableFormatting to true. After that, when we parse YAML (either from the Style arg or a config file), we use the Style variable as the "template" for fields that haven't been set. In this case, the "none" fallback style causes DisableFormatting to remain true, so no formatting will take place.
As it happens, my first diff patch uploaded here fixed this issue by accident. Instead of reusing the same local Style variable, I declared one for each case where we'd need to parse. The fallback style case would use its own variable, FallbackStyle, which would not be used as the template style when parsing the YAML config.
What's interesting is that the way the code is originally written allows you to use fallback style as a way to set the "base" configuration for which the subsequently parsed YAML overlays. For example, if I don't set fallback style, the assumed base style is "LLVM", and any YAML parsed modifies this LLVM base style. But if I pass a fallback style of "Mozilla", then this becomes the base style over which the YAML overlays.
So to my mind, we have 2 approaches to fix the "none" style bug:
- Go with a similar approach to what I did originally; that is, we always assume LLVM as the base style, and make sure that the fallback style is not used as the base style, but rather only as the style to return if none is found. I think this is what FallbackStyle was originally intended for.
- Allow fallback style to maintain its current meaning - that is, as a way to set the base style when "style" is "file" or YAML. In this case, I believe the right thing is to treat FallbackStyle set to "none" as though no fallback style were passed in at all. Concretely, we might want to modify getPredefinedStyle to return LLVM style when "none" is passed in, instead of what it does now. I personally think this is more confusing, and also introduces more risk.
Let me know what you think. If we go with option 1, I could fold the fix into this change.
This is a good YAQ, which IMO should be tackled in a separate patch. In this patch though, it might be easier to proceed by keeping the original behavior and leaving a FIXME. In general, reviewers like smaller patches with single purpose :)
Some nits. Some is almost good :)
BTW, do you have clang-tools-extra in your source tree? There are also some references in the subtree to the changed interface. It would be nice if you could also fix them in a separate patch and commit these two patches together (I mean, within a short period of time) so that you wouldn't break build bots.
References should be found in these files:
extra/change-namespace/ChangeNamespace.cpp extra/clang-move/ClangMove.cpp extra/include-fixer/tool/ClangIncludeFixer.cpp extra/clang-apply-replacements/tool/ClangApplyReplacementsMain.cpp extra/clang-tidy/ClangTidy.cpp
Thanks!
I'll grab clang-tools-extras and make the second patch as you suggest. Btw, can you explain how I would avoid breaking build bots? I assume you mean that clang-tools-extras gets built separately against some version of clang, which gets auto-updated. When would I know the right time to push the second patch through?
Also, I assume I'd have to get this second patch approved before ever pushing the first, right?
The patch LGTM now. I'll accept both this and the one for clang-tool-extra when it is ready.
Regarding buildbots, we have bots that continually run builds/tests (). Many buildbots test llvm and clang as well as clang-tools-extra (e.g. with ninja check-all) at some revision. Also note that although llvm/clang and clang-tools-extra are different repos, they do share the same revision sequence. So if clang-tools-extra is in an inconsistent state, many buildbots can fail and affect llvm/clang builds. Unfortunately, there is no atomic way to commit two revisions to two repositories, so we just commit them quickly one after another so that we do less damage. Do you have commit access to LLVM btw?
Minor comment change, turned the ObjC test into a non-fixture test, and renamed FormatStyleOrError to FormatStyle in format function.
I do have commit access. I'll get to work on the clang-tools-extras and open a new review for it once it's ready. Thanks.
Delayed
Do you need to do any of the following?
- schedule an activity for some time in the future
- periodically query the server or update the interface
- queue up work to do that must wait for other initialization to finish
- perform a large amount of computation
GWT provides three classes that you can use to defer running code until a later point in time: Timer, Scheduler, and IncrementalCommand.
- Scheduling work: the Timer class
- Deferring some logic into the immediate future: the Scheduler class
- Avoiding Slow Script Warnings: the IncrementalCommand class
Scheduling work: the Timer class
Use the Timer class to schedule work to be done in the future.
To create a timer, create a new instance of the Timer class and then override the run() method entry point.
Timer timer = new Timer() {
  public void run() {
    Window.alert("Timer expired!");
  }
};

// Execute the timer to expire 2 seconds in the future
timer.schedule(2000);
Notice that the timer will not have a chance to execute the run() method until after control returns to the JavaScript event loop.
Creating Timeout Logic
One typical use for a timer is to timeout a long running command. There are a few rules of thumb to remember in this situation:
- Store the timer in an instance variable.
- Always check to see that the timer is not currently running before starting a new one. (Check the instance variable to see that it is null.)
- Remember to cancel the timer when the command completes successfully.
- Always set the instance variable to null when the command completes or the timer expires.
Below is an example of using a timeout with a Remote Procedure Call (RPC).
import com.google.gwt.user.client.Timer;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.rpc.AsyncCallback;

public class Foo {
  // A keeper of the timer instance in case we need to cancel it
  private Timer timeoutTimer = null;

  // An indicator when the computation should quit
  private boolean abortFlag = false;

  static final int TIMEOUT = 30; // 30 second timeout

  void startWork() {
    // ...
    // Check to make sure the timer isn't already running.
    if (timeoutTimer != null) {
      Window.alert("Command is already running!");
      return;
    }

    // Create a timer to abort if the RPC takes too long
    timeoutTimer = new Timer() {
      public void run() {
        Window.alert("Timeout expired.");
        timeoutTimer = null;
        abortFlag = true;
      }
    };

    // (re)Initialize the abort flag and start the timer.
    abortFlag = false;
    timeoutTimer.schedule(TIMEOUT * 1000); // timeout is in milliseconds

    // Kick off an RPC
    myService.myRpcMethod(arg, new AsyncCallback() {
      public void onFailure(Throwable caught) {
        Window.alert("RPC Failed:" + caught);
        cancelTimer();
      }

      public void onSuccess(Object result) {
        cancelTimer();
        if (abortFlag) {
          // Timeout already occurred. discard result
          return;
        }
        Window.alert("RPC returned: " + (String) result);
      }
    });
  }

  // Stop the timeout timer if it is running
  private void cancelTimer() {
    if (timeoutTimer != null) {
      timeoutTimer.cancel();
      timeoutTimer = null;
    }
  }
}
Periodically Running Logic
In order to keep a user interface up to date, you sometimes want to perform an update periodically. You might want to poll the server to check for new data, or update some sort of animation on the screen. In this case, use the Timer class scheduleRepeating() method:
public class Foo {
  // A timer to update the elapsed time count
  private Timer elapsedTimer;

  private Label elapsedLabel = new Label();
  private long startTime;

  public Foo() {
    // ... Add elapsedLabel to a Panel ...

    // Create a new timer
    elapsedTimer = new Timer() {
      public void run() {
        showElapsed();
      }
    };

    startTime = System.currentTimeMillis();

    // Schedule the timer for every 1/2 second (500 milliseconds)
    elapsedTimer.scheduleRepeating(500);

    // ... The elapsed timer has started ...
  }

  /**
   * Show the current elapsed time in the elapsedLabel widget.
   */
  private void showElapsed() {
    double elapsedTime = (System.currentTimeMillis() - startTime) / 1000.0;
    NumberFormat n = NumberFormat.getFormat("#,##0.000");
    elapsedLabel.setText("Elapsed: " + n.format(elapsedTime));
  }
}
Deferring some logic into the immediate future: the Scheduler class
Sometimes you want to break up your logic loop so that the JavaScript event loop gets a chance to run between two pieces of code. The Scheduler class will allow you to do that. The logic that you pass to Scheduler will run at some point in the future, after control has been returned to the JavaScript event loop. This little delay may give the interface a chance to process some user events or initialize other code. To use the Scheduler class in its simplest form, you create a subclass of the Command class, overriding the execute() method, and pass it to Scheduler.scheduleDeferred:
TextBox dataEntry;

// Set the focus on the widget after setup completes.
Scheduler.get().scheduleDeferred(new Command() {
  public void execute() {
    dataEntry.setFocus();
  }
});

dataEntry = new TextBox();
Avoiding Slow Script Warnings: the IncrementalCommand class
AJAX developers need to be aware of keeping the browser responsive to the user. When JavaScript code is running, user interface components like buttons and text areas will not respond to user input. If the browser were to allow this to continue, the user might think the browser is “hung” and be tempted to restart it. But browsers have a built-in defense mechanism, the unresponsive script warning.
Any script that runs without returning control to the JavaScript main event loop for more than 10 seconds or so runs the risk of having the browser popup this dialog to the user. The dialog is there because a poorly written script might have an infinite loop or some other bug that is keeping the browser from responding. But in AJAX applications, the script may be doing legitimate work.
GWT provides an IncrementalCommand class that helps perform long running calculations. It works by repeatedly calling an ‘execute()’ entry point until the computation is complete.
The following example is an outline of how to use the IncrementalCommand class to do some computation in a way that allows the browser’s user interface to be responsive:
public class IncrementalCommandTest implements EntryPoint {
  // Number of times doWork() is called
  static final int MAX_LOOPS = 10000;

  // Tight inner loop in doWork()
  static final int WORK_LOOP_COUNT = 50;

  // Number of times doWork() is called in IncrementalCommand before
  // returning control to the event loop
  static final int WORK_CHUNK = 100;

  // A button to kick off the computation
  Button button;

  public void onModuleLoad() {
    button = new Button("Start Computation");
    button.addClickHandler(new ClickHandler() {
      public void onClick(ClickEvent event) {
        doWorkIncremental();
      }
    });
  }

  /**
   * Create an IncrementalCommand instance that gets called back every so often
   * until all the work it has to do is complete.
   */
  private void doWorkIncremental() {
    // Turn off the button so it won't start processing again.
    button.setEnabled(false);

    IncrementalCommand ic = new IncrementalCommand() {
      int counter = 0;
      int result = 0;

      public boolean execute() {
        for (int i = 0; i < WORK_CHUNK; i++) {
          counter++;
          result += doWork();

          // If we have done all the work, exit with a 'false'
          // return value to terminate further execution.
          if (counter == MAX_LOOPS) {
            // Re-enable button
            button.setEnabled(true);
            // ... other end of computation processing ...
            return false;
          }
        }
        // Call the execute function again.
        return true;
      }
    };

    // Schedule the IncrementalCommand instance to run when
    // control returns to the event loop
    Scheduler.get().scheduleIncremental(ic);
  }

  /**
   * Routine that keeps the CPU busy for a while.
   * @return an integer result of the calculation
   */
  private int doWork() {
    int result = 0;
    // ... computation ...
    return result;
  }
}
Introduction
In the first article in this bot series, we looked at the basics of the Microsoft Bot Framework, including how to send a basic message reply. In this article, we will take a look at how to use LUIS to recognize what the user is saying and how to apply that recognition to drive a conversation.
Getting Started with LUIS
LUIS is part of the Microsoft Cognitive Services suite hosted inside of Azure. LUIS performs natural language understanding and translates a sentence or phrase into an easy-to-program-for intent. It can even extract key parts of a word or phrase without you ever needing to touch complex parsing code. To get started, head on over to the LUIS site and create an account.
LUIS is organized into intents, utterances, and entities. An intent is a thing you want to recognize, like "The user wants to know the account balance." An utterance is a phrase or sentence that a user might say that you want to map to that intent. You can map many utterances to a single intent. An entity is used as a parameter for the intent.
In this example, we are creating a BankerBot that the user can ask what the balances of their various accounts are. The user will ask what their account balance is and will need to name a specific account like, "Checking," "Money Market," or "Savings." The account type in this example is the entity. You will need to create this as a custom entity type before you create your first intent.
To create the entity, click the "+" symbol next to the "entity" menu and you will get a simple dialog that lets you name the entity. You also can define child entities here, but that isn't necessary for this step. If you create child entities, it will create a defined ontology of members that should map to the entity.
To create an intent, click the "+" symbol next to the intents menu. You'll see a dialog like the one in Figure 1. We'll label this intent "Account Balances" and add a parameter called "account" that maps to the custom AccountType entity. When you create a new intent for the first time, you can also give it an example utterance for it to map to.
Figure 1: Adding a new intent
Once your intent has been created, you can map as many utterances to the intent as you want by clicking the "New Utterances" tab at the top of the page. You don't have to precisely create each variation of the phrases you want to match. In general, LUIS is good at mapping to small variations on the sentence, but you will want to grow the utterances supported over time. LUIS helps you learn what to add under the "Suggest" tab. The suggest tab keeps a catalog of phrases it saw from end users that it wasn't sure how to map to intents and you can use this page to see what those phrases are. After you've seen what users are saying that isn't matched, you're only a few short clicks away from mapping it to your existing intents.
Let's Have a Conversation: Integrating LUIS into Your Bot
LUIS is excellent at categorizing utterances into intents, but a non-trivial bot needs to be able to have a conversation with the user. The user should be able to ask questions, get answers, and ask follow-up questions that retain context.
The way you can do this with LUIS is to create a sub-class of the LuisDialog object. The LuisDialog object lets you define what methods map to what intents and build out dialog trees under those intents. The LuisModel attribute on the object lets you identify your app by your subscription ID and identify which model you're using, as seen in the following snippet. This code snippet also defines some basic mock data.
[LuisModel("YourLUISModelID", "YourSubscriptionID")]
[Serializable]
public class BankerBotDialog : LuisDialog<object>
{
    public BankerBotDialog(ILuisService service = null) : base(service)
    {
        // MOCK DATA
        this.Accounts.Add("checking", new Account()
        {
            AccountNumber = "113122231",
            AccountType = "Checking",
            Balance = 11416.25M
        });
        this.Accounts.Add("savings", new Account()
        {
            AccountNumber = "612351251",
            AccountType = "Savings",
            Balance = 51618.88M
        });
    }
Next, you will want to map one of your methods to a LUIS intent. The following method includes an attribute, called LuisIntent, that you can use to map this method as being the handler for that intent. It passes in an IDialogContext that includes information about the current chat session and a LuisResult that includes the original text the user sent along with LUIS' top recommended matches. By default, you know that your method was the top recommended method from LUIS, but sometimes it's helpful to see what some of the other near-miss intents are.
The next code segment uses a helper method we'll look at in a second, called "TryFindAccount," to see which account the user wants to see the balance on. If it gets a match, it uses the context object to post the response to the user. If it doesn't match an account, it will ask the user a follow-up question as to which account they want the balance on.
[LuisIntent("Account Balances")]
public async Task AccountBalanceAction(IDialogContext context, LuisResult result)
{
    if (TryFindAccount(result, out this.CurrentAccount))
    {
        await context.PostAsync(string.Format("Your current account balance " +
            "for {0} ({1}) is {2}.",
            this.CurrentAccount.AccountNumber,
            this.CurrentAccount.AccountType,
            this.CurrentAccount.Balance));
    }
    else
    {
        await context.PostAsync("Which account do you want a balance on?");
    }
    context.Wait(MessageReceived);
}
The TryFindAccount method looks at the LuisResult and attempts to pull out a matched EntityType. It then takes this matched entity and sees if we can find a matching account in the mock-data account collection.
public bool TryFindAccount(LuisResult result, out Account CurrentAccount)
{
    CurrentAccount = null;
    string what;
    EntityRecommendation title;

    if (result.TryFindEntity(AccountTypeEntity, out title))
    {
        what = title.Entity;
    }
    else
    {
        what = DefaultAccount;
    }

    return this.Accounts.TryGetValue(what.ToLower(), out CurrentAccount);
}
The final step in this process is to wire up the LuisDialog created into the main MessagesController.cs. Modify the Post method as seen below to pass the incoming message to your LuisDialog.
public async Task<Message> Post([FromBody]Message message)
{
    if (message.Type == "Message")
    {
        message.BotPerUserInConversationData = null;
        return await Conversation.SendAsync(message, () => new BankerBotDialog());
    }
    else
    {
        return HandleSystemMessage(message);
    }
}
Exploring the Source Code
The source code for this example can be downloaded here. You will need to have the Microsoft Bot Framework installed, as described in the first article in this series as a prerequisite. You also will need to open a free account at luis.ai and create a model that matches the source code's LUIS intents. You also will need to modify the AppIDs and AppSecrets in the web.config file and LuisDialog, respectively, for this source code to work.
The code in MessagesController.cs is mostly boilerplate code from the default template with the minor changes noted in the examples above in the Post function. BankerBotDialog.cs contains the implementation of the LuisDialog object and has most of the operative code.
Conclusion
LUIS makes it surprisingly easy to add sophisticated language meaning recognition to your application. In addition to the easy integration to the Microsoft Bot Framework explored in this article, you also can use their API directly to send it utterances and get recognized meanings back.
In the next article in this series, we will take a look at some architectural approaches toward building a better bot!
About the Author
David Talbot is the director of Architecture & Strategy at a leading digital bank. He has almost two decades of innovation across many startups and has written extensively on technology.
Continuing on my series of going through the first 100 Project Euler problems, we're on to problem two. Here's problem one in case you missed it.
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms
The Fibonacci sequence is a series of numbers in which the next number is the sum of the previous two.
The first time I solved this I think I probably checked if each number in the sequence was even, with a modulo similar to problem one, such as:
item % 2 == 0
But it occurred to me that the numbers follow a pattern (odd, odd, even), so I could sum the even terms by adding together every third term.
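For comparison, the first approach I mentioned (checking each term with the modulo) might look something like this:

```python
# The even-check version: walk the sequence and keep the terms that
# pass the item % 2 == 0 test.
def sum_of_even_fibs(n):
    a, b = 1, 1
    total = 0
    while b <= n:
        if b % 2 == 0:
            total += b
        a, b = b, a + b
    return total

print(sum_of_even_fibs(4000000))  # 4613732
```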
I wrote a method to sum together all even Fibonacci numbers less than n. The solution is below or on GitHub
def sumOfEvenFibs(n):
    terms = [1, 1, 2]
    total = 0
    while terms[2] <= n:
        total += terms[2]
        t0 = terms[1] + terms[2]
        t1 = terms[2] + t0
        terms = [t0, t1, t0 + t1]
    return total

print(sumOfEvenFibs(4000000))
Interested to see how others approached this one!
Converting between LCIDs and RFC 1766 language codes
Raymond
Occasionally, I see someone ask for a function that converts between LCIDs (such as 0x0409 for English-US) and RFC 1766 language identifiers (such as "en-us"). The rule of thumb is, if it's something a web browser would need, and it has to do with locales and languages, you should look in the MLang library.
In this case, the IMultiLanguage::GetRfc1766FromLcid method does the trick. For illustration, here's a program that takes US-English and converts it to RFC 1766 format. For fun, we also convert "sv-fi" (Finland-Swedish) to an LCID.
#include <stdio.h>
#include <ole2.h>
#include <oleauto.h>
#include <mlang.h>

int __cdecl main(int argc, char **argv)
{
  HRESULT hr = CoInitialize(NULL);
  if (SUCCEEDED(hr)) {
    IMultiLanguage * pml;
    hr = CoCreateInstance(CLSID_CMultiLanguage, NULL,
                          CLSCTX_ALL, IID_IMultiLanguage, (void**)&pml);
    if (SUCCEEDED(hr)) {
      // Let's convert US-English to an RFC 1766 string
      BSTR bs;
      LCID lcid = MAKELCID(MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US),
                           SORT_DEFAULT);
      hr = pml->GetRfc1766FromLcid(lcid, &bs);
      if (SUCCEEDED(hr)) {
        printf("%ws\n", bs);
        SysFreeString(bs);
      }

      // And a sample reverse conversion just for good measure
      bs = SysAllocString(L"sv-fi");
      if (bs && SUCCEEDED(pml->GetLcidFromRfc1766(&lcid, bs))) {
        printf("%x\n", lcid);
      }
      SysFreeString(bs);
      pml->Release();
    }
    CoUninitialize();
  }
  return 0;
}
When you run this program, you should get

en-us
81d
“en-us” is the RFC 1766 way of saying “US-English”, and 0x081d is MAKELCID(MAKELANGID(LANG_SWEDISH, SUBLANG_SWEDISH_FINLAND), SORT_DEFAULT). If you browse around, you'll find lots of other interesting functions in the MLang library. You may recall that earlier we saw how to use MLang to display strings without those ugly boxes.
Update (January 2008): The globalization folks have told me that they'd prefer that people didn't use MLang. They recommend instead the functions LCIDToLocaleName and LocaleNameToLCID. The functions are built into Windows Vista and are also available downlevel via a redistributable.
2012-08-30
Safer handling of C memory in ATS
In previous ATS posts I've written about how ATS can make using C functions safer by detecting violations of a C API's requirements at compile time. This post is a walkthrough of a simple example involving a C API that copies data from a buffer of memory to one allocated by the caller. I'll start with an initial attempt that has no more safety than the C version and work through different options.
The API I’m using for the example is a base64 encoder from the stringencoders library. The C definition of the function is:
size_t modp_b64_encode(char* dest, const char* str, size_t len);
Given a string, str, this will store the base64 encoded value of len bytes of that string into dest. dest must be large enough to hold the result. The documentation for modp_b64_encode states:
- dest should be allocated by the caller to contain at least ((len+2)/3*4+1) bytes. This will contain the null-terminated b64 encoded result.
- str contains the bytes
- len contains the number of bytes in str
- returns length of the destination string plus the ending null byte. i.e. the result will be equal to strlen(dest) + 1.
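As a quick sanity check of that size rule, here's a small Python sketch (using the stdlib base64 module as a stand-in; the C library itself isn't involved):

```python
import base64

def required_dest_size(n):
    # Size rule from the documentation above: ((len+2)/3*4+1) bytes.
    return (n + 2) // 3 * 4 + 1

# Check it against Python's stdlib encoder for a range of input lengths:
# the encoded text plus the NUL terminator fits the documented size exactly.
for n in range(100):
    assert len(base64.b64encode(b"x" * n)) + 1 == required_dest_size(n)
```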
A first attempt
A first attempt at wrapping this in ATS is:
extern fun modp_b64_encode
  (dest: !strptr1, str: string, len: size_t): size_t = "mac#modp_b64_encode"
dest is defined to be a linear non-null string. The ! means that the C function does not free the string and does not store it, so we are responsible for allocating the memory and freeing it. The following function uses this API to convert a string into a base64 encoded string:
extern fun string_to_base64 (s: string): strptr1
implement string_to_base64 (s) = let
  val s_len = string_length (s)                           // 1
  val d_len = size1_of_size ((s_len + 2) / 3 * 4)         // 2
  val (pfgc, pf_bytes | p_bytes) = malloc_gc (d_len + 1)  // 3
  val () = bytes_strbuf_trans (pf_bytes | p_bytes, d_len) // 4
  val dest = strptr_of_strbuf @(pfgc, pf_bytes | p_bytes) // 5
  val len = modp_b64_encode (dest, s, s_len)              // 6
in
  dest                                                    // 7
end
A line by line description of what this function does follows:
- Get the length of the input string as s_len.
- Compute the length of the destination string as d_len. This does not include the null terminator.
- Allocate d_len number of bytes plus one for the null terminator. This returns three things. pfgc is a proof variable that is used to ensure we free the memory later. pf_bytes is a proof that we have d_len+1 bytes allocated at a specific memory address. We need to provide this proof to other functions when we pass the pointer to the memory around so that the compiler can check we are using the memory correctly. p_bytes is the raw pointer to the memory. We can't do much with this without the proof pf_bytes saying what the pointer points to.
- We have a proof that says we have a raw array of bytes. What we want to say is that this memory is actually a pointer to a null terminated string. In ATS this is called a strbuf. The function bytes_strbuf_trans converts the proof pf_bytes from "byte array of length n" to "null terminated string of length n-1, with null terminator at n".
- Now that we have a proof saying our pointer is a string buffer we can convert that to a strptr1. The function strptr_of_strbuf does this conversion. It consumes the pfgc and pf_bytes and returns the strptr1.
- Here we call our FFI function.
- Returns the resulting strptr1.
string_to_base64 is called with code like:
implement main () = let
  val s = string_to_base64 "hello_world"
in
  begin
    print s; print_newline ();
    strptr_free s
  end
end
Although this version of the FFI usage doesn’t gain much safety over using it from C, there is some. I originally had the following and the code wouldn’t type check:
val d_len = size1_of_size ((s_len + 2) / 3 * 4 + 1)     // 2
val (pfgc, pf_bytes | p_bytes) = malloc_gc d_len        // 3
val () = bytes_strbuf_trans (pf_bytes | p_bytes, d_len) // 4
In line 2 I include the extra byte for the null terminator. But line 4 uses this same length when converting the byte array proof into a proof of having a string buffer. What line 4 now says is we have a string buffer of length d_len with a null terminator at d_len+1. The proof pf_bytes states we only have a buffer of length d_len, so this fails to type check. The ability for ATS to typecheck the lengths of arrays, together with its knowledge that string buffers require null terminators, saved the code from an off by one error here.
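To see why this is an off by one error, here's the compile-time check that bytes_strbuf_trans performs, modeled as a runtime assertion in Python (names are illustrative):

```python
# The check that bytes_strbuf_trans performs at compile time, modeled
# as a runtime assertion: viewing a buffer of buf_len bytes as a string
# of str_len characters needs room for the NUL at index str_len.
def bytes_strbuf_trans(buf_len, str_len):
    assert buf_len >= str_len + 1, "no room for the NUL terminator"

s_len = 11                                 # len("hello_world")

# Fixed version: d_len excludes the NUL; allocate d_len + 1 bytes.
d_len = (s_len + 2) // 3 * 4
bytes_strbuf_trans(d_len + 1, d_len)       # accepted

# Buggy version: the NUL was folded into d_len, and d_len was reused
# as both the allocation size and the string length.
d_len_bug = (s_len + 2) // 3 * 4 + 1
rejected = False
try:
    bytes_strbuf_trans(d_len_bug, d_len_bug)
except AssertionError:
    rejected = True
assert rejected                            # the type checker says no
```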
A version of this code is available in github gist 3522299. This can be cloned and used to work through the following examples, trying out different approaches.
$ git clone git://gist.github.com/3522299.git example
$ cd example
$ make
$ ./base64
Typecheck the source buffer
One issue with this version of the wrapper is that we can pass an invalid length of the source string:
val len = modp_b64_encode (dest, s, 1000)
Since s is less than length 1000 this will access memory out of bounds. This can be fixed by using dependent types to declare that the given length must match the length of the source string:
extern fun modp_b64_encode {n:nat}
  (dest: !strptr1, str: string n, len: size_t n): size_t = "mac#modp_b64_encode"
We declare that a sort n exists that is a natural number. The str argument is a string of length n and the length passed is that same length. This is now a type error:
val s = string1_of_string "hello"
val len = modp_b64_encode (dest, s, 1000)
Unfortunately so is this:
val s = string1_of_string "hello"
val len = modp_b64_encode (dest, s, 2)
Ideally we want to be able to pass in a length of less than the total string length so we can base64 encode a subset of the source string. The following FFI declaration does this:
extern fun modp_b64_encode {n:nat}
  (dest: !strptr1, str: string n, len: sizeLte n): size_t = "mac#modp_b64_encode"
sizeLte is a prelude type definition that’s defined as:
typedef sizeLte (n: int) = [i:int | 0 <= i; i <= n] size_t (i)
It is a size_t where the value is between 0 and n inclusive. Our string_to_base64 function is very similar to what it was before but uses string1 functions on the string. string1 is a dependently typed string which includes the length in the type:
extern fun string_to_base64 {n:nat} (s: string n): strptr1
implement string_to_base64 (s) = let
  val s_len = string1_length (s)
  val d_len = (s_len + 2) / 3 * 4
  val (pfgc, pf_bytes | p_bytes) = malloc_gc (d_len + 1)
  val () = bytes_strbuf_trans (pf_bytes | p_bytes, d_len)
  val dest = strptr_of_strbuf @(pfgc, pf_bytes | p_bytes)
  val len = modp_b64_encode (dest, s, s_len)
in
  dest
end
Typecheck the destination buffer
The definition for modp_b64_encode defines the destination as being a linear string. This has no length defined at the type level, so it's possible to pass a linear string that is too small and get out of bounds memory access. This version of the FFI definition changes the type to a strbuf.
As explained previously, a strbuf is a string buffer that is null terminated. It has two type level arguments:
abst@ype strbuf (bsz: int, len: int)
The first, bsz, is the length of the entire buffer. The second, len, is the length of the string. So bsz should be greater than len to account for the null terminator.
The new version of the modp_b64_encode function is quite a bit more complex, so I'll go through it line by line:
extern fun modp_b64_encode
  {l:agz}                                                    // 1
  {n,bsz:nat | bsz >= (n + 2) / 3 * 4 + 1}                   // 2
  (pf_dest: !b0ytes bsz @ l >> strbuf (bsz, rlen - 1) @ l |  // 3
   dest: ptr l,                                              // 4
   str: string n,
   len: sizeLte n
  ): #[rlen:nat | rlen >= 1; rlen <= bsz] size_t rlen        // 5
  = "mac#modp_b64_encode"
Put simply, this function definition takes a pointer to a byte array enforced to be the correct size and after calling enforces that the pointer is now a pointer to a null terminated string with the correct length. In detail:
- This line declares a dependent type variable l of sort agz. Sort agz is an address greater than zero. In other words it's a non-null pointer address. It's used to allow the definition to reason about the memory address of the destination buffer.
- Declare dependent type variables n and bsz of sort nat. Sort nat is an integer greater than or equal to zero. bsz is additionally constrained to be greater than or equal to (n+2)/3*4+1. This is used to ensure that the destination buffer is at least the correct size as defined by the documentation for modp_b64_encode. In this way we can enforce the constraint at the type level.
- All function arguments to the left of the | symbol in ATS are proof arguments. This line defines pf_dest which on input should be a proof that an array of bytes of length bsz is held at memory address l (b0ytes is a typedef for an array of uninitialized bytes). The ! states that this function does not consume the proof. The >> means that after the function is called the type of the proof changes to the type on the right hand side of the >>. In this case it's a string buffer at memory address l of total length bsz, but the actual string is of length rlen-1 (the result length as described next).
- The dest variable is now a simple pointer type pointing to memory address l. The proof pf_dest describes what this pointer actually points to.
- We define a result dependent type rlen for the return value. This is the length of the returned base64 encoded string plus one for the null terminator. We constrain this to be less than or equal to bsz since it's not possible for it to be longer than the destination buffer. We also constrain it to be greater than or equal to one since a result of an empty string will return length one. The # allows us to refer to this result dependent type in the function arguments, which we do for pf_dest to enforce the constraint that the string buffer length is the result length less one.
Our function to call this is a bit simpler:
extern fun string_to_base64 {n:nat} (s: string n): strptr

  strptr_of_strbuf @(pfgc, pf_bytes | p_bytes)
end
We no longer need to do the manual conversion to a
strbuf as the definition of
modp_b64_encode changes the type of the proof for us.
Handle errors
So far the definitions of
modp_b64_encode have avoided the error result. If the length returned is
-1 there was an error. In our last definition of the function we changed the type of the proof to a
strbuf after calling. What we actually need to do is only change the type if the function succeeded. On failure we must leave the proof the same as when the function was called. This will result in forcing the caller to check the result for success so they can unpack the proof into the correct type. Here’s the new, even more complex, definition:
dataview encode_v (int, int, addr) =
  | {l:agz} {bsz:nat} {rlen:int | rlen > 0; rlen <= bsz}
    encode_v_succ (bsz, rlen, l) of strbuf (bsz, rlen - 1) @ l
  | {l:agz} {bsz:nat} {rlen:int | rlen <= 0}
    encode_v_fail (bsz, rlen, l) of b0ytes bsz @ l

extern fun modp_b64_encode {l:agz} {n,bsz:nat | bsz >= (n + 2) / 3 * 4 + 1} (
    pf_dest: !b0ytes bsz @ l >> encode_v (bsz, rlen, l) | // 1
    dest: ptr l,
    str: string n,
    len: sizeLte n
  ): #[rlen:int | rlen <= bsz] size_t rlen = "mac#modp_b64_encode"
First we define a view called
encode_v to encode the result that the destination buffer becomes. A dataview is like a datatype but is for proofs. It is erased after type checking is done. The view
encode_v is dependently typed over the size of the buffer
bsz, the length of the result,
rlen, and the memory address,
l.
The first view constructor,
encode_v_succ, is for when the result length is greater than zero. In that case the view contains a
strbuf. This is the equivalent of the case in our previous iteration where we only handled success.
The second view constructor,
encode_v_fail, is for the failure case. If the result length is less than or equal to zero then the view contains the original array of bytes.
In the line marked
1 above, we’ve changed the type the proof becomes to that of our view. Notice that the result length is one of the dependent types of the view. Now when calling
modp_b64_encode we must check the return type so we can unpack the view to get the correct proof. Here is the new calling code:
extern fun string_to_base64 {n:nat} (s: string n): strptr

if len > 0 then let
  prval encode_v_succ pf = pf_bytes // 1
in
  strptr_of_strbuf @(pfgc, pf | p_bytes)
end else let
  prval encode_v_fail pf = pf_bytes // 2
  val () = free_gc (pfgc, pf | p_bytes)
in
  strptr_null ()
end
end
The code is similar to the previous iteration until after the
modp_b64_encode call. Once that call is made the proof
pf_bytes is now an
encode_v. This means we can no longer use the pointer
p_bytes as no proof explaining what exactly it is pointing to is in scope.
We need to unpack the proof by pattern matching against the view. To do this we branch on a conditional based on the value of the length returned by
modp_b64_encode. If this length is greater than zero we know
pf_bytes is an
encode_v_succ. We pattern match on it in the line marked 1 to extract our proof that
p_bytes is a
strbuf and can then turn that into a
strptr and return it.
If the length is not greater than zero we know that
pf_bytes must be an
encode_v_fail. We pattern match on this in the line marked 2, extracting the proof that it is the original array of bytes we allocated. This is freed in the following line and we return a null strptr. The type of
string_to_base64 has changed to allow returning a null pointer.
We know the types of the
encode_v proof based on the result length because this is encoded in the definition of
encode_v. In that dataview definition we state what the valid values of rlen are for each case. If our condition checks for the wrong value we get a type error on the pattern matching line. If we fail to handle the success or the failure case we get type errors for not consuming proofs correctly (e.g. for not freeing the allocated memory on failure).
Conclusion
The result of our iterations ends up providing the following safety guarantees at compile time:
- We don’t exceed the memory bounds of the source buffer
- We don’t exceed the memory bounds of the destination buffer
- The destination buffer is at least the minimum size required by the function documentation
- We can’t treat the destination buffer as a string if the function fails
- We can’t treat the destination buffer as an array of bytes if the function succeeds
- Off by one errors due to null terminator handling are removed
- Checking to see if the function call failed is enforced
The complexity of the resulting definition is increased but someone familiar with ATS can read it and know what the function expects. The need to also read function documentation and find out things like the minimum size of the destination buffer is reduced.
When the ATS compiler compiles this code to C code the proof checking is removed. The resulting C code looks very much like handcoded C without runtime checks. Something like the following is generated:
char* string_to_base64(const char* s) {
    int s_len = strlen(s);
    int d_len = (s_len + 2) / 3 * 4 + 1;
    void* p_bytes = malloc(d_len);
    int len = modp_b64_encode(p_bytes, s, s_len);
    if (len > 0)
        return (char*)p_bytes;
    else {
        free(p_bytes);
        return 0;
    }
}
1. Java is an interpreted language: Java source is compiled once and the Java virtual machine can run it many times.
2. JDK (Java Development Kit, the software development package), JRE (Java Runtime Environment, the runtime needed to run Java programs).
3. javac compiles a Java program; java runs a Java program.
4. A file can have at most one public class.
5.java in switch Statement can only detect int Type values (JDK1.6 before ).
6. stay java One byte in is eight bits , A character takes up two bytes (16 position unicode character string ).
7. In the memory byte Occupy 1 Bytes ,int Occupy 4 Bytes ,long Type account 8 Bytes ;float Occupy 4 Bytes ,double Occupy 8 Bytes ;boolean Type account 1 Bytes ;
8.java No overloading of operators is provided .
9. stay static Cannot access non in method static Members of .static The method is to add... Before the function static qualifiers , Such as :public static vooid main(String args[]);public static void print();
10. It is a convention to name packages with your company's domain name reversed, followed by the project name, e.g.: cn.edu.jxau.Game24.
11. The default access level is package-private (default).
12. Interfaces can inherit from each other and classes can inherit from each other, but between a class and an interface, the class can only implement the interface. A class can inherit only one parent class, but it can implement multiple interfaces.
13. Arrays
The form of a one-dimensional array:
(1) int a[]; a = new int[5]; is equivalent to int a[] = new int[5];
(2) int[] a; a = new int[5]; is equivalent to int[] a = new int[5];
The form of a two-dimensional array:
int a[][] = {{1,2}, {3,4,5,6}, {7,8,9}}; A two-dimensional array can be regarded as an array of arrays.
In Java, declaration and initialization of a multi-dimensional array must proceed from the highest dimension to the lowest, e.g.:
Method (1)
int a[][] = new int[3][];
a[0] = new int[2];
a[1] = new int[4];
a[2] = new int[3]; // correct
int t[][] = new int[][4]; // illegal
Method (2)
int a[][] = new int[3][5]; // correct, allocates a two-dimensional array of three rows and five columns
This is an easy mistake for people coming from C and C++.
14. Enhanced for loop
Advantage: the enhanced for loop is very convenient for traversing an array or collection.
Shortcoming: you cannot easily access the subscript (index) of an array element.
Summary: use the enhanced for loop only for simply traversing and reading the contents.
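For illustration (an example of ours, since the original snippet was lost in extraction), traversing a collection with the enhanced for loop:

```java
import java.util.Arrays;
import java.util.List;

public class ForEachDemo {
    // Enhanced for: concise traversal of an array or collection,
    // but the loop body never sees an index.
    public static int sum(List<Integer> numbers) {
        int total = 0;
        for (int n : numbers) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4);
        System.out.println(sum(numbers)); // 10
    }
}
```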
15. Generics
In JDK 1.4 and earlier the element type was not explicit:
elements loaded into a collection were treated as Object and lost their real type; a cast was needed when fetching them back, which is inefficient and error-prone.
Solution:
specify the element type at the same time as defining the collection;
the type can be specified when declaring the Collection;
it can also be specified on the Iterator used in a loop.
Benefit:
enhances the readability and robustness of the program.
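A minimal sketch of the difference (our example, not from the original notes):

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    // With a type parameter the compiler knows every element is a String,
    // so no cast is needed and wrong insertions fail at compile time.
    public static String shout(List<String> words) {
        return words.get(0).toUpperCase();
    }

    public static void main(String[] args) {
        List<String> words = new ArrayList<>();
        words.add("java");
        // words.add(42);  // would be a compile-time error
        System.out.println(shout(words)); // JAVA
    }
}
```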
16. Basic thread concepts:
(1) A thread is a sequential flow of control within a program.
(2) Differences between threads and processes:
Each process has its own code and data space (process context); switching between processes carries a large overhead.
Threads can be thought of as lightweight processes: threads of the same process share code and data space, while each thread has its own run-time stack and program counter (PC), so switching between threads is cheap.
Multi-process: the operating system can run multiple tasks (programs) at the same time.
Multi-threaded: multiple sequential flows execute simultaneously within the same application.
17. Implementing threads
Java threads are implemented through the java.lang.Thread class. When the VM starts, a thread is defined by the main method (public static void main(String[] args) {}). New threads can be created by creating Thread objects. Each thread does its work through the run() method of a particular Thread object; run() is the thread body. A thread is started by calling the start() method of the Thread class.
18. When an interface will do, do not inherit from the Thread class: by implementing the interface (Runnable) you can provide the thread body and still inherit from another class.
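A sketch of the start()/run()/join() lifecycle (our example; names are illustrative):

```java
public class ThreadDemo {
    static volatile String message = "not yet run";

    public static void runWorker() {
        // The Runnable is the thread body (what run() executes);
        // start() actually launches the new thread.
        Thread t = new Thread(
            () -> message = "run on " + Thread.currentThread().getName());
        t.start();
        try {
            t.join(); // wait for the worker to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        runWorker();
        System.out.println(message);
    }
}
```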
19. The sleep method: a static method of Thread (public static void sleep(long millis) throws InterruptedException) that puts the current thread to sleep (suspends execution for millis milliseconds).
The join method: merges another thread into the current one (waits for it to finish).
The yield method: gives up the CPU so that other threads get a chance to execute.
20. synchronized (this): locks the current object; while one thread holds the lock, other threads cannot cut in on the locked sections. The two ways to use it are:
(1) a synchronized method;
(2) a synchronized block.
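The two code snippets here were lost in extraction; the following is our reconstruction of the standard forms, not the original author's code:

```java
public class SyncDemo {
    private int count = 0;

    // (1) Synchronized method: locks this for the whole method body.
    public synchronized void incrementA() {
        count++;
    }

    // (2) Synchronized block: locks this only around the critical section.
    public void incrementB() {
        synchronized (this) {
            count++;
        }
    }

    public int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncDemo d = new SyncDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.incrementA(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.incrementB(); });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(d.getCount()); // 2000: no updates lost
    }
}
```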
21. wait(): the premise for using wait() is that the calling code holds the object's lock via synchronized. notify() wakes up one other waiting thread; notifyAll() wakes up all other waiting threads.
22. Differences between the wait method and the sleep method:
(1) wait is a method of the Object class; sleep is a static method of the Thread class.
(2) During wait, other threads can access the locked object; calling wait requires holding the object's lock.
(3) During sleep, other threads cannot access the locked object, because sleep does not release the lock.
Exception in thread "main" java.lang.NoClassDefFoundError: TestConWindow
Joel Cochran
Ranch Hand
Joined: Mar 23, 2001
Posts: 301
posted
May 25, 2001 16:20:00
Exception in thread "main" java.lang.NoClassDefFoundError: TestConWindow
Uggghhhh! This program worked fine until today!!! It must be a Classpath thing, but I can't figure it out. The class compiles fine but then receives this error when I try to execute. Could this message be any LESS helpful?
Here is the code. The 'User' class is in the same directory as the TestConWindow.class ... I had a classpath problem with this last week which I got around by compiling with -classpath. That is the only way I could get the program to compile, and adding .; to my classpath does not work.
package camra2;

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.sql.*;

public class TestConWindow {
    private static JFrame aWindow = new JFrame("This is a Centered Window");

    private static String[][] TestCon() {
        String clientInfo[][] = new String[10][2];
        User userID = new User("JRC", "MARK");
        Connection con;
        try {
            Class.forName(userID.getClassForName()).newInstance();
        } catch (ClassNotFoundException e) {
            System.out.println("\nError: " + e);
            System.exit(0);
        }
        try {
            con = userID.getConnection();
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT MCLIENT, MCLNAME " +
                                             "FROM CGILIB2.VMCLIENTS " +
                                             "ORDER BY MCLIENT");
            for (int i = 0; rs.next(); i++) {
                clientInfo[i][0] = rs.getString(1);
                clientInfo[i][1] = rs.getString(2);
            }
            rs.close();
            stmt.close();
        } catch (Exception e) {
            System.out.println("\nError: " + e.getMessage());
        } finally {
            try {
                con.close();
            } catch (SQLException e) {
                System.out.println("\nError: " + e.getMessage());
            }
        }
        return clientInfo;
    }

    public static void main(String args[]) {
        Toolkit theKit = aWindow.getToolkit();
        Dimension wndSize = theKit.getScreenSize();
        aWindow.setBounds(50, 50, 650, 450);
        aWindow.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        aWindow.setCursor(Cursor.getPredefinedCursor(Cursor.DEFAULT_CURSOR));
        aWindow.getContentPane().setBackground(Color.pink);

        JPanel buttonRow = new JPanel();
        JButton button;
        Dimension size = new Dimension(80, 20);
        buttonRow.add(button = new JButton("OK"));
        button.setPreferredSize(size);

        ActionListener actionListener = new ActionListener() {
            public void actionPerformed(ActionEvent actionEvent) {
                System.out.println("Quit Poking Me!!!");
            }
        };

        buttonRow.add(button = new JButton("React!"));
        button.setPreferredSize(size);
        button.addActionListener(actionListener);
        buttonRow.add(button = new JButton("Cancel"));
        button.setPreferredSize(size);

        Container content = aWindow.getContentPane();
        content.setLayout(new BorderLayout());

        Box clientList = Box.createVerticalBox();
        String clientData[][] = TestCon();
        for (int i = 0; i < clientData.length; i++) {
            if (clientData[i][0] == null) {
                break;
            } else {
                clientList.add(new JLabel(clientData[i][0] + ": " + clientData[i][1]));
            }
        }
        content.add(clientList, BorderLayout.CENTER);
        content.add(buttonRow, BorderLayout.SOUTH);
        aWindow.setVisible(true);
    }
}
Thanks if anyone can help.
Pulling my hair out...
------------------
I'm a soldier in the NetScape Wars...
Joel
Nathan Pruett
Bartender
Joined: Oct 18, 2000
Posts: 4121
I like...
posted
May 26, 2001 19:46:00
Joel,
In your code, you are specifying that your TestConWindow is in the camra2 package... if you are in the same directory as the class file, try running it this way :
java -classpath .. camra2.TestConWindow
You have to specify the directory directly before the beginning of your package in the classpath, and if a class is a member of a package, you have to specify its full name (i.e. full package path + class name) on the command line.
HTH,
-Nate
Write once, run anywhere, because there's nowhere to hide! - /. A.C.
Joel Cochran
Ranch Hand
Joined: Mar 23, 2001
Posts: 301
posted
May 29, 2001 07:05:00
0
I've removed the package reference. Both Classes are in the same directory. They both compiled fine. My classpath includes the current directory. I tried running the program without -classpath and with -classpath. No matter what I receive the same error:
C:\JavaSource\CAMRA2>java TestConWindow
Exception in thread "main" java.lang.NoClassDefFoundError: TestConWindow

C:\JavaSource\CAMRA2>java -classpath c:\javasource\camra2 TestConWindow
Exception in thread "main" java.lang.NoClassDefFoundError: TestConWindow
I don't know what else to do, I don't see anything wrong with the code (same as above w/o package statement), "main" obviously exists, I'm not running the program with ".java" extension, my classpath is set and complete.
Going bald...
navin kumar
Greenhorn
Joined: May 22, 2001
Posts: 9
posted
May 31, 2001 08:16:00
Hello,
You first set the classpath in the environment file, i.e. the "autoexec.bat" file. There, set it as follows:
classpath=c:\jdk1.3\<package name>
If you have further doubts about this, you can post them here.
Nathan Pruett
Bartender
Joined: Oct 18, 2000
Posts: 4121
posted
May 31, 2001 09:24:00
I'm really confused about this one...
I've been trying to test your code and there doesn't look like there should be any problem... the only suggestion I can make is to check out the logic in the Class.forName related stuff... in the spoofed User class I made to test your code, if I was returning bogus info (i.e. a class that did not exist), I got a similar error message... strangely, you should not get that error message if you were making the same mistake I was, because the TestConWindow class obviously exists....
Sorry I couldn't help more...
React - Fun with keys
React uses the
key attribute during its reconciliation phase to decide which elements can be reused for the next render.
They are important for dynamic lists. React will compare the keys of the new element with the previous keys and 1) mount components having a new key 2) unmount components whose keys are not used anymore.
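That matching step can be sketched in plain JavaScript. This is our simplified model of the idea, not React's actual reconciler code:

```javascript
// Toy model of key-based reconciliation: compare previous and next keys.
function diffKeys(prevKeys, nextKeys) {
  const prev = new Set(prevKeys);
  const next = new Set(nextKeys);
  return {
    mount: nextKeys.filter((k) => !prev.has(k)),   // new keys: new instances
    unmount: prevKeys.filter((k) => !next.has(k)), // dropped keys: destroyed
    reuse: nextKeys.filter((k) => prev.has(k)),    // kept keys: same instance
  };
}

// Index keys: prepending an item shifts the data, but the keys 0 and 1
// still match, so only the brand-new last index mounts.
console.log(diffKeys([0, 1], [0, 1, 2]));
// Stable ids: only the prepended item mounts; the rest are reused as-is.
console.log(diffKeys(["a", "b"], ["new", "a", "b"]));
```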
Many React developers heard of the general advice that you should not use
index as a key, but what exactly can go wrong when using
keys in a bad way? What else can we do when we play around with keys?
For better understanding, let’s consider the example of rendering a list of
inputs.
When clicking a button, we will insert a new item with text
Front to the front of the list.
import React from "react";
import { render } from "react-dom";

class Item extends React.PureComponent {
  state = { text: this.props.text };

  onChange = (event) => {
    this.setState({ text: event.target.value });
  };

  componentDidMount() {
    console.log("Mounted ", this.props.text);
  }

  componentWillUnmount() {
    console.log("Unmounting ", this.props.text);
  }

  render() {
    console.log("rerendering ", this.props.text);
    const { text } = this.state;
    return (
      <li>
        <input value={text} onChange={this.onChange} />
      </li>
    );
  }
}

class App extends React.Component {
  state = {
    items: [
      { text: "First", id: 1 },
      { text: "Second", id: 2 }
    ]
  };

  addItem = () => {
    const items = [{ text: "Front", id: Date.now() }, ...this.state.items];
    this.setState({ items });
  };

  render() {
    return (
      <div>
        <ul>
          {this.state.items.map((item, index) => (
            <Item {...item} key={index} />
          ))}
        </ul>
        <button onClick={this.addItem}>Add Item</button>
      </div>
    );
  }
}

render(<App />, document.getElementById("root"));
Using
index as a key the following happens:
Another
Item with text
Second instead of
Front is inserted at the back of the list?
Here’s what happens:
Item is an uncontrolled component: The text the user writes into its
inputfield is stored as
state
- A new data item
{ text: "Front" }is inserted to the beginning of the list data.
- The list is re-rendered with the index value as
key. So the previous components are re-used for the first two data items and given the correct props
Frontand
First, but the state is not updated in
Item. That’s why the first two component instances keep the same text.
- A new component instance is created for
key: 2because no previous matching key is found. It is filled with the
propsof the last list data item which is
Second.
Another interesting point is the
render calls that happen. Item is a
PureComponent, so it only updates when the
text prop (or state) changes:
rerendering Front
rerendering First
rerendering Second
Mounted Second
All components are re-rendered. This happens because the element with
key: 0 is reused for the first data item and receives its
props, but the first data item is now the new
Front object, triggering a
render. The same happens with the other components because the old data items are now all shifted by one place.
So what’s the fix?
The fix is easy, we give each list data item a unique
id once upon creation (not on each render!).
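A sketch of what "once upon creation" means in practice (our example, framework-free):

```javascript
// Assign the id when the data item is created, never during render.
let nextId = 0;
function makeItem(text) {
  return { text, id: nextId++ };
}

const items = [makeItem("First"), makeItem("Second")]; // ids 0, 1
items.unshift(makeItem("Front"));                      // id 2

// The ids travel with their items, so old and new elements can be
// matched no matter how the array is reordered.
console.log(items.map((item) => item.id)); // [ 2, 0, 1 ]
```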
All components instances will be matched with their corresponding data item, i.e., they receive the same
props as before - avoiding another
render.
Ignoring the performance benefits when using
ids in dynamic lists for now, the example shows that bugs introduced by keys only ever happen with regards to uncontrolled components, components that keep internal state.
If we rewrite
Item as a controlled component, by moving the state out of it, the bug is gone.
Why? Again, because the bug was reusing a component for a different data item, and therefore the internal state still reflected the state of the previous data item, but the props of a different one. Making the component controlled, by removing its state completely, we, of course, don’t have this discrepancy anymore. (But there’s still the issue with the unnecessary re-renders.)
Abusing key to fix broken third-party components
React only needs
keys when matching several elements, so setting a key on a single child is not needed.
But it can still be useful to set a key on a single child component.
If you change the key, React will throw away the whole component (unmount it), and mount a new component instance in its place.
Why could this be useful?
Again, we’re coming back to uncontrolled components. Sometimes, you’re using a third-party component and you cannot modify its code to make it controlled.
If a component has some internal state and it’s implemented in a bad way, i.e., the state is derived only once in the constructor, but
getDerivedStateFromProps /
componentWillReceiveProps is not implemented to reflect reoccurring
props changes in its internal state, the standard React toolbox cannot help you here. There is no
forceRemount.
However, we can just set a new
key on this component to achieve the desired behavior of completely initializing a new component. The old component will be unmounted, and a new one is mounted with the new
props initializing the
state.
TL;DR:
Using
index as a key can:
- lead to unnecessary re-renders
- introduce bugs when the list items are uncontrolled components but still use
props
The
key property can be used to force a complete remount of a component which can sometimes be useful. | https://cmichel.io/react-fun-with-keys | CC-MAIN-2021-49 | refinedweb | 880 | 64.51 |
Hi everyone
First post on these forums and I'm hoping this will be a good spot to hang out.
I recently started my new education here in Denmark, which focuses on system development, and obviously a large part of that is programming. I'm going through some of the initial programming exercises and we're doing for loops atm.
We are using the book "Building Java Programs - A Back to Basics Approach (2nd edition)" and I'm doing an exercise that goes like this:
4. Write a method called printSquare that accepts a minimum and maximum integer and prints a square of lines of
increasing numbers. The first line should start with the minimum, and each line that follows should start with the
next-higher number. The sequence of numbers on a line wraps back to the minimum after it hits the maximum. For
example, the call printSquare(3, 7);
should produce the following output:
34567
45673
56734
67345
73456
If the maximum passed is less than the minimum, the method produces no output.
Currently my code is looking like this, but I can't seem to get the last part right:
public class Opg4Kap3 {
    public static void main(String[] args) {
        printSquare(3, 7);
    }

    public static void printSquare(int a, int b) {
        for (int k = a; k <= b; k++) {
            System.out.print(k);
            for (int i = k + 1; i <= b; i++) {
                System.out.print(i);
            }
            for (int r = a; r <= 6; r++) {
                System.out.print(r);
            }
            System.out.println();
        }
    }
}
If you compile it and run it you can see I'm getting this output, which is close but no cigar:
345673456
45673456
5673456
673456
73456
We don't need to turn this in or anything so I'm not really interested in the final answer, was just hoping someone could give me a clue to the next step so I can finish it though, since it annoys me I can't get the last part right (or if I have to do something completely different to begin with?).
Also this is not supposed to be done with anything but a method and for loops so no if statement tips or anything.
Thanks in advance | http://www.javaprogrammingforums.com/loops-control-statements/17560-beginner-loop-excercise-stuck.html | CC-MAIN-2015-11 | refinedweb | 364 | 64.14 |
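For anyone reading along: the wrap-around the exercise describes is modular arithmetic. The sketch below shows only the indexing idea, with made-up helper names, rather than a full solution:

```java
public class WrapDemo {
    // Value at column j of the row shifted by s places:
    //   a + (s + j) % (b - a + 1)
    public static String row(int a, int b, int shift) {
        int span = b - a + 1;
        StringBuilder sb = new StringBuilder();
        for (int j = 0; j < span; j++) {
            sb.append(a + (shift + j) % span);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        for (int shift = 0; shift < 5; shift++) {
            System.out.println(row(3, 7, shift));
        }
        // 34567
        // 45673
        // 56734
        // 67345
        // 73456
    }
}
```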
Say I want to switch each letter in a message with it's place in the reverse alphabet. Why can't I seem to use the captured group and do it in one gsub?
Perhaps someone could explain in general about using captured groups in gsub, can the back references be bare(no ' ')? Can I use #{\1}?
def decode(message)
a = ('a'..'z').to_a
z = a.reverse
message.gsub!(/([[:alpha:]])/, z[a.index('\1')])
end
decode("the quick brown fox")
Remember that arguments to methods are evaluated immediately and the result of that is passed in to the method. If you want to make the substitution adapt to the match:
message.gsub!(/([[:alpha:]])/) { |m| z[a.index($1)] }
That employs a block that gets evaluated for each match. | https://codedump.io/share/SbnovMCNdqH/1/ruby-regex-capture-groups-in-gsub | CC-MAIN-2017-04 | refinedweb | 128 | 68.77 |
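To see the difference concretely, here is the block form run end to end (the atbash method name is ours):

```ruby
A = ('a'..'z').to_a
Z = A.reverse

# The block runs once per match, so the replacement can depend on the
# matched text; a string argument is evaluated only once, before gsub.
def atbash(message)
  message.gsub(/[a-z]/) { |m| Z[A.index(m)] }
end

puts atbash("the quick brown fox")  # => gsv jfrxp yildm ulc
```

Note that applying the substitution twice returns the original text, since the reversed-alphabet mapping is its own inverse.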
Measurement Power LoPy
- rikkertmark
Hello,
Today I test the current of the LoPy board with a Power supply on the Battery Input.
The Measured values are without USB Power Supply!!!!
I Have create the next python Files:
from machine import UART
import os
uart = UART(0, 115200)
os.dupterm(uart)
import time
while (True):
time.sleep(1)
The Current @t 3.8V = ~115mA
Above current the Wlan is Active.
So I deInit() the Wlan in the console with the next Lines:
from network import WLAN
wlan = WLAN()
wlan.deinit()
The Current now is 108mA @t 3.8V without USB connectie (power supply)
Used the ESP32 processor so much power? How Can I decrease the power?
Esp32 Clock frequency lower?, deep sleep mode? Is this available at this moment?
Or are there any other peripherals that I can turn of?
- jmarcelino
The only optimisation at the moment is enabling WLAN Power Saving in STA mode (basically the WiFi radio only 'wakes up' at the beacon interval set by your gateway) so you'll see current go down to about 50mA and then 100+ mA at the wake intervals.
This is done by re-initilizating WLAN with power_save=True option. However it's not perfect yet, it may caused network timeouts for some people so it became disabled again by default.
Currently calling .deinit() on WiFi and Bluetooth doesn't turn off the peripheral so there's no power savings, just deallocates it. I think this will be changing soon-ish.
Other optimisations and sleep modes are still on the way, deep sleep is being mentioned as being the next one to come. | https://forum.pycom.io/topic/729/measurement-power-lopy | CC-MAIN-2017-34 | refinedweb | 271 | 75.4 |
Using qsort() and bsearch() with strings - example program in C
By: Manoj Kumar
This program makes use of an array of pointers to strings. You can "sort" the strings by sorting the array of pointers. However, this method requires a modification in the comparison function. This function is passed pointers to the two items in the array that are compared. However, you want the array of pointers sorted based not on the values of the pointers themselves but on the values of the strings they point to.
Because of this, you must use a comparison function that is passed pointers to pointers. Each argument to comp() is a pointer to an array element, and because each element is itself a pointer (to a string), the argument is therefore a pointer to a pointer. Within the function itself, you dereference the pointers so that the return value of comp() depends on the values of the strings pointed to.
The fact that the arguments passed to comp() are pointers to pointers creates another problem. You store the search key in buf[], and you also know that the name of an array (buf in this case) is a pointer to the array. However, you need to pass not buf itself, but a pointer to buf. The problem is that buf is a pointer constant, not a pointer variable. buf itself has no address in memory; it's a symbol that evaluates to the address of the array. Because of this, you can't create a pointer that points to buf by using the address-of operator in front of buf, as in &buf.
What to do? First, create a pointer variable and assign the value of buf to it. In the program, this pointer variable has the name key. Because key is a pointer variable, it has an address, and you can create a pointer that contains that address--in this case, key1. When you finally call bsearch(), the first argument is key1, a pointer to a pointer to the key string. The function bsearch() passes that argument on to comp(), and everything works properly.
1: /* Using qsort() and bsearch() with strings. */ 2: 3: #include <stdio.h> 4: #include <stdlib.h> 5: #include <string.h> 6: 7: #define MAX 20 8: 9: int comp(const void *s1, const void *s2); 10: 11: main() 12: { 13: char *data[MAX], buf[80], *ptr, *key, **key1; 14: int count; 15: 16: /* Input a list of words. */ 17: 18: printf("Enter %d words, pressing Enter after each.\n",MAX); 19: 20: for (count = 0; count < MAX; count++) 21: { 22: printf("Word %d: ", count+1); 23: gets(buf); 24: data[count] = malloc(strlen(buf)+1); 25: strcpy(data[count], buf); 26: } 27: 28: /* Sort the words (actually, sort the pointers). */ 29: 30: qsort(data, MAX, sizeof(data[0]), comp); 31: 32: /* Display the sorted words. */ 33: 34: for (count = 0; count < MAX; count++) 35: printf("\n%d: %s", count+1, data[count]); 36: 37: /* Get a search key. */ 38: 39: printf("\n\nEnter a search key: "); 40: gets(buf); 41: 42: /* Perform the search. First, make key1 a pointer */ 43: /* to the pointer to the search key.*/ 44: 45: key = buf; 46: key1 = &key; 47: ptr = bsearch(key1, data, MAX, sizeof(data[0]), comp); 48: 49: if (ptr != NULL) 50: printf("%s found.\n", buf); 51: else 52: printf("%s not found.\n", buf); 53: return(0); 54: } 55: 56: int comp(const void *s1, const void *s2) 57: { 58: return (strcmp(*(char **)s1, *(char **)s2)); 59: } Enter 20 words, pressing Enter after each. Word 1: apple Word 2: orange Word 3: grapefruit Word 4: peach Word 5: plum Word 6: pear Word 7: cherries Word 8: banana Word 9: lime Word 10: lemon Word 11: tangerine Word 12: star Word 13: watermelon Word 14: cantaloupe Word 15: musk melon Word 16: strawberry Word 17: blackberry Word 18: blueberry Word 19: grape Word 20: cranberry 1: apple 2: banana 3: blackberry 4: blueberry 5: cantaloupe 6: cherries 7: cranberry 8: grape 9: grapefruit 10: lemon 11: lime 12: musk melon 13: orange 14: peach 15: pear 16: plum 17: star 18: strawberry 19: tangerine 20: watermelon Enter a search key: orange orange found.
DON'T forget to put your search array into ascending order before using bsearch().
Archived Comments
1. Hi, a have a problem using bsearch. i need return
View Tutorial By: Daniel at 2009-03-30 04:11:13
2. Daniel, i think you need another func, also you ca
View Tutorial By: רופא שיניים at 2009-08-15 16:48:07
3. what about subtracting the returned pointer (unsig
View Tutorial By: tal at 2010-06-22 08:14:38
4. I agree with #2 u are missing that line
View Tutorial By: ויזה לארה"ב at 2011-10-26 10:39:50 | https://java-samples.com/showtutorial.php?tutorialid=598 | CC-MAIN-2019-30 | refinedweb | 811 | 70.73 |
Clojure Web Frameworks Round-Up: Enlive & Comp.
Clojure is rather new member of the LISP family of languages which runs on the Java platform. Introduced in 2007 it has generated a lot of interest.
Unlike most JVM languages, Clojure is not object-oriented. but provides things you want from OO like:
- encapsulation (via namespaces, private definitions and closures)
- polymorphism (multimethods)
- functional reuse instead of inheritance.
In the last years there have been many web frameworks and libraries build with Clojure like:
InfoQ had a small Q&A with James Reeves and Christophe Grand, the creators of Enlive & Compojure, about their projects and their experiences working with Clojure:
InfoQ: Would you like to tell us a little bit about yourself and how you started working with Clojure?
Christophe (Enlive): I'm a independent consultant, I live in France and I've been spoilt by early exposure to functional programming. Before discovering Clojure I have been working for months on a real-time collaborative text editor for the legal department of a customer. This software was written in Rhino and it was an exercise in defensive programming. Once this gig over I looked for a better language, a "five-legged sheep" -- a rare bird in French -- and whose five legs were: "dynamic" typing, functional programming, meta-programming, strong concurrency story, hosted on the JVM. I was ready to sacrifice one leg (requirement) of the sheep when I stumbled on Clojure which had been around for 4 months at the time. Once the robustness and the soundness of the language and its implementation assessed it became my default language.
James (Compojure): I'm a British developer currently living in London. My interest in Clojure came about a few years ago when I was looking to learn a Lisp, and Clojure happened to be released around that time.
Clojure was simple enough that its core libraries were relatively easy to learn, and I had become a fan of immutability and functional programming after programming in Haskell for the previous year. I'm also a big fan of languages with a straightforward syntax and a small but powerful standard library, so Clojure instantly appealed.
InfoQ: What is the single feature of Clojure that you find most useful in your every day work?
Christophe (Enlive): Its sane state management: immutability by default and efficient persistent data structures. Unlike other languages which allow for a functional style, Clojure strongly discourages you from writing anything else: hence you can't mess with mutable solutions because it's easier. The resulting code is generally concise and easier to understand and debug.
James (Compojure): This is a difficult question to answer, because Clojure's features tend to be designed to solve very specific problems. For example, protocols provide polymorphism, but not inheritance or encapsulation. As such, it's difficult to pick out a single feature that I'd consider to be most useful, because it's the combination, the concert of all these individual tools that makes Clojure such a joy to work with. So I'm going to cheat a little in my answer, and say that it's this emphasis on simplicity that I find most useful. In Clojure I can choose only the specific tools I need to solve the problem, whilst in many other languages I'd need to work within a particular framework that might not always allow for an optimal solution.
InfoQ: Would you like to explain to us how your project works and how developers can use it?
Christophe (Enlive): Enlive is a HTML manipulation library. There are two main use-cases: webscraping and templating. With Enlive, templates are based on plain HTML files, potentially straight from the designers with no special markup, no convention ; in Clojure you describe how to transform this HTML to generate the actual output (which parts to repeat, where to put data etc.), selectors closely modeled after CSS3 are used to identify the places of such transformations. It makes really easy to roundtrip the design.
Similarly, when scraping, selectors are used to denote the pieces of data to retrieve.Technically put, Enlive serves the same goals as XSLT but you write Clojure code instead of XML and CSS-like selectors are used in lieu of XPath.
James (Compojure): Compojure is a small web framework based on Ring, and provides a concise routing DSL that developers can use to define web applications. Usually Compojure is used in conjunction with other libraries, such as Hiccup or Enlive for templating, or ClojureQL for accessing the database.
InfoQ: What tools do you use for building Clojure apps? Is there one that you find particularly useful?
Christophe (Enlive): I use CCW (Counterclockwise) the Eclipse plugin for Clojure, it's making great progress and its main developer is really commited to it.
James (Compojure): I use Emacs with SLIME for developing, and Leiningen for building and deploying. When you get used to it, the Emacs paredit mode is very useful for quickly moving around S-expressions. I also tend to keep a SLIME session running and reload my source code often, so I'm always developing against a running environment.
InfoQ: What is the future roadmap for your project?
Christophe (Enlive): Moving it to clojure contrib and after that I'd like to make its behavior more tunable in regards to caching, encoding and escaping.
James (Compojure): I actually don't plan on adding much more to Compojure, at least not for a while.
I tend to prefer small libraries and functions that perform specific tasks, rather than large frameworks that attempt to do everything. There's little I can add to Compojure that doesn't seem more suitable in a separate library.
For example, one project idea I've been meaning to get around to writing is an equivalent of the respond_to method included in Ruby on Rails, which allows developers to specify HTTP responses in different formats. I could include this as part of Compojure, but there's no reason not to implement it as a separate library, and not limit it to a particular framework.
Rate this Article
- Editor Review
- Chief Editor Action | https://www.infoq.com/news/2011/10/clojure-web-frameworks | CC-MAIN-2017-51 | refinedweb | 1,022 | 59.43 |
Vue 2 sports excellent performance stats, a relatively small payload (the bundled runtime version of Vue weighs in at 30KB once minified and gzipped), along with updates to companion libraries like vue-router and Vuex, the state management library for Vue. There’s far too much to cover in just one article, but keep an eye out for some later articles where we’ll look more closely at various libraries that couple nicely with the core framework.
As we go through this tutorial, you’ll see many features that Vue has that are clearly inspired by other frameworks. This is a good thing; it’s great to see new frameworks take some ideas from other libraries and improve on them. In particular, you’ll see Vue’s templating is very close to Angular’s, but its components and component lifecycle methods are closer to React’s (and Angular’s, as well).
One such example of this is that, much like React and nearly every framework in JavaScript land today, Vue uses the idea of a virtual DOM to keep rendering efficient. Vue uses a fork of snabbdom, one of the more popular virtual DOM libraries. The Vue site includes documentation on its Virtual DOM rendering, but as a user all you need to know is that Vue is very good at keeping your rendering fast (in fact, it performs better than React in many cases), meaning you can rest assured you’re building on a solid platform.
Much like other frameworks these days, Vue's core building block is the component. Your application should be a series of components that build on top of each other to produce the final application. Vue.js goes one step further by suggesting (although not enforcing) that you define your components in a single `.vue` file, which can then be parsed by build tools (we'll come onto those shortly). Given that the aim of this article is to fully explore Vue and what it feels like to work with, I'm going to use this convention for my application.
A Vue file looks like so:
```html
<template>
  <p>This is my HTML for my component</p>
</template>

<script>
export default {
  // all code for my component goes here
}
</script>

<style scoped>
/* CSS here
 * by including `scoped`, we ensure that all CSS
 * is scoped to this component!
 */
</style>
```
Alternatively, you can give each element a `src` attribute and point to a separate HTML, JS or CSS file respectively if you don't like having all parts of the component in one file.
Whilst the excellent Vue CLI exists to make setting up a full project easy, when starting out with a new library I like to do it all from scratch so I get more of an understanding of the tools.
These days, webpack is my preferred build tool of choice, and we can couple that with the vue-loader plugin to support the Vue.js component format that I mentioned previously. We'll also need Babel and the `env` preset, so we can write all our code using modern JavaScript syntax, as well as the webpack-dev-server, which will update the browser when it detects a file change.
Let’s initialize a project and install the dependencies:
```bash
mkdir vue2-demo-project
cd vue2-demo-project
npm init -y
npm i vue
npm i webpack webpack-cli @babel/core @babel/preset-env babel-loader vue-loader vue-template-compiler webpack-dev-server html-webpack-plugin --save-dev
```
Then create the initial folders and files:
```bash
mkdir src
touch webpack.config.js src/index.html src/index.js
```
The project structure should look like this:
```
.
├── package.json
├── package-lock.json
├── src
│   ├── index.html
│   └── index.js
└── webpack.config.js
```
Now let’s set up the webpack configuration. This boils down to the following:
- `vue-loader` for any `.vue` files
- `babel-loader` with the `env` preset for any `.js` files
- `src/index.html` as a template for the generated HTML page
```javascript
// webpack.config.js
const VueLoaderPlugin = require('vue-loader/lib/plugin')
const HtmlWebPackPlugin = require("html-webpack-plugin")

module.exports = {
  module: {
    rules: [
      {
        test: /\.vue$/,
        loader: 'vue-loader',
      },
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            presets: ['@babel/preset-env']
          }
        }
      }
    ]
  },
  plugins: [
    new VueLoaderPlugin(),
    new HtmlWebPackPlugin({
      template: "./src/index.html"
    })
  ]
}
```
Finally, we’ll add some content to the HTML file and we’re ready to go!
```html
<!-- src/index.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>My Vue App</title>
  </head>
  <body>
    <div id="app"></div>
  </body>
</html>
```
We create an empty `div` with the ID of `app`, as this is the element that we're going to place our Vue application in. I always prefer to use a `div`, rather than just the `body` element, as that lets me have control over the rest of the page.
We’re going to stay true to every programming tutorial ever and write a Vue application that puts “Hello, World!” onto the screen before we dive into something a bit more complicated.
Each Vue app is created by importing the library and then instantiating a new `Vue` instance:
```javascript
import Vue from 'vue'

const vm = new Vue({
  el: '#app',
})
```
We give Vue an element to render onto the page, and with that, we've created a Vue application! We pass a selector for the element that we want Vue to replace with our application. This means when Vue runs it will take the `div#app` that we created and replace it with our application.
The reason we use the variable name `vm` is because it stands for "View Model". Although not strictly associated with the "Model View View-Model" (MVVM) pattern, Vue was inspired in part by it, and the convention of using the variable name `vm` for Vue applications has stuck. Of course, you can call the variable whatever you'd like!
So far, our application isn't doing anything, though, so let's create our first component, `App.vue`, that will actually render something onto the page.
Vue doesn’t dictate how your application is structured, so this one is up to you. I ended up creating one folder per component, in this case
App (I like the capital letter, signifying a component), with three files in it:
```bash
mkdir src/App
touch src/App/{index.vue,script.js,style.css}
```
The file structure should now be:
```
.
├── package.json
├── package-lock.json
├── src
│   ├── App
│   │   ├── index.vue
│   │   ├── script.js
│   │   └── style.css
│   ├── index.html
│   └── index.js
└── webpack.config.js
```
App/index.vue defines the template, then imports the other files. This is in keeping with the structure recommended in the What About Separation of Concerns? section of Vue’s docs.
```html
<!-- src/App/index.vue -->
<template>
  <p>Hello, World!</p>
</template>

<script src="./script.js"></script>
<style scoped src="./style.css"></style>
```
I like calling it `index.vue`, but you might want to call it `app.vue` too so it's easier to search for. I prefer importing `App/index.vue` in my code versus `App/app.vue`, but again you might disagree, so feel free to pick whatever you and your team like best.
For now, our template is just `<p>Hello, World!</p>`, and I'll leave the CSS file blank. The main work goes into `script.js`, which looks like so:
```javascript
// src/App/script.js
export default {
  name: 'App',
  data() {
    return {}
  },
}
```
Doing this creates a component which we'll give the name `App`, primarily for debugging purposes, which I'll come to later, and then defines the data that this component has and is responsible for. For now, we don't have any data, so we can just tell Vue that by returning an empty object. Later on, we'll see an example of a component using data.
Now we can head back into `src/index.js` and tell the Vue instance to render our `App` component:
```javascript
// src/index.js
import Vue from 'vue'
import AppComponent from './App/index.vue'

const vm = new Vue({
  el: '#app',
  components: {
    app: AppComponent,
  },
  render: h => h('app'),
})
```
Firstly, we import the component, trusting webpack and the vue-loader to take care of parsing it. We then declare the component. This is an important step: by default, Vue components are not globally available. Each component must have a list of all the components they’re going to use, and the tag that it will be mapped to. In this case, because we register our component like so:
```javascript
components: {
  app: AppComponent,
}
```
This means that in our templates we'll be able to use the `app` element to refer to our component.
Finally, we define the `render` function. This function is called with a helper (commonly referred to as `h`) that's able to create elements. It's not too dissimilar to the `React.createElement` function that React uses. In this case, we give it the string `'app'`, because the component we want to render is registered as having the tag `app`.
More often than not (and for the rest of this tutorial) we won't use the `render` function on other components, because we'll define HTML templates. But the Vue.js guide to the render function is worth a read if you'd like more information.
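To make the idea concrete, here's a toy, hyperscript-style helper. This is purely illustrative (Vue's real `createElement` returns full VNode objects with many more fields), but it shows the kind of structure a render function produces:

```javascript
// A toy hyperscript-style helper. Vue's real createElement is far more
// capable, but the shape of the idea is the same: describe a node as data.
function h(tag, props = {}, children = []) {
  return { tag, props, children };
}

// Our render function conceptually produces a single node:
const vnode = h('app');

// And a template like <div id="app"><p>Hello</p></div> becomes a tree:
const tree = h('div', { id: 'app' }, [h('p', {}, ['Hello'])]);

console.log(vnode.tag); // 'app'
console.log(tree.children[0].children[0]); // 'Hello'
```

The virtual DOM layer then diffs trees like these between renders and applies only the minimal changes to the real DOM.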
Once we’ve done that, the final step is to create an npm script in
package.json:
```json
"scripts": {
  "start": "webpack-dev-server --mode development --open"
},
```
Now, run `npm run start`. Your default browser should open, and you should see "Hello, World!" on the screen.
Try editing `src/App/index.vue` to change the message to something else. If all has gone correctly, webpack-dev-server should refresh the page to reflect your changes.
Yay! We’re up and running with Vue.js.
Before we dive into a slightly more complicated app with Vue, now is a good time to mention that you should definitely get the Vue devtools installed. These sit within the Chrome developer tools and give you a great way to look through your app and all the properties being passed round, state that each component has, and so on.
As an example application, we’re going to be using the GitHub API to build an application that lets us enter a username and see some GitHub stats about that user. I’ve picked the GitHub API here as it’s familiar to most people, usable without authenticating, and gives us a fair amount of information.
Before starting an application I like to have a quick think about what components we'll need, and I'm thinking that our `App` component will render two further components: `GithubInput`, for taking input from the user, and `GithubOutput`, which will show the user's information on the screen. We'll start with the input.
Note: you can find all the code on GitHub and even check out the application running online.
Create folders for the `GithubOutput` and `GithubInput` components within the `src` directory:
```bash
mkdir src/{GithubInput,GithubOutput}
```
Add the necessary files to each:
```bash
touch src/GithubInput/{index.vue,script.js,style.css}
touch src/GithubOutput/{index.vue,script.js,style.css}
```
The structure of the `src` folder should now look like so:

```
.
├── App
│   ├── index.vue
│   ├── script.js
│   └── style.css
├── GithubInput
│   ├── index.vue
│   ├── script.js
│   └── style.css
├── GithubOutput
│   ├── index.vue
│   ├── script.js
│   └── style.css
├── index.html
└── index.js
```
Let’s start with the
GithubInput component. As with the
App component, the
index.vue file should contain the template, as well as loading in the script and CSS file. The template simply contains
<p>github input</p> for now. We’ll fill it in properly shortly. I like putting in some dummy HTML so I can check I’ve got the template wired up properly when creating a new component:
```html
<!-- src/GithubInput/index.vue -->
<template>
  <p>github input</p>
</template>

<script src="./script.js"></script>
<style scoped src="./style.css"></style>
```
When creating this component the one thing we do differently is create a piece of data that’s associated with the component. This is very similar to React’s concept of state:
```javascript
// src/GithubInput/script.js
export default {
  name: 'GithubInput',
  data() {
    return {
      username: '',
    }
  }
}
```
This says that this component has a piece of data, `username`, that it owns and is responsible for. We'll update this based on the user's input shortly.
Finally, to get this component onto the screen, I need to register it with the `App` component, as it's the `App` component that will be rendering it.
To do this, I update `src/App/script.js` and tell it about `GithubInput`:
```javascript
// src/App/script.js
import GithubInput from '../GithubInput/index.vue'

export default {
  name: 'App',
  components: {
    'github-input': GithubInput,
  },
  data() {
    return {}
  },
}
```
And then I can update the `App` component's template:
```html
<!-- src/App/index.vue -->
<template>
  <div>
    <p>Hello World</p>
    <github-input></github-input>
  </div>
</template>
```
A restriction of Vue components (which is also true in Angular and React) is that each component must have one root node, so when a component has to render multiple elements, it's important to remember to wrap them all in something, most commonly a `div`.
Our `GithubInput` component will need to do two things:
- keep track of the value of the input, so we know what the user has typed
- let the rest of the application know when the user submits the form
We can do the first version by creating a `form` with an `input` element in it. We can use Vue's built-in directives that enable us to keep track of form values. The template for `GithubInput` looks like so:
```html
<form v-on:submit.prevent="onSubmit">
  <input type="text" v-model="username">
  <button type="submit">Go!</button>
</form>
```
There are two important attributes that you'll notice: `v-on` and `v-model`.
`v-on` is how we bind to DOM events in Vue and call a function. For example, `<p v-on:click="foo">Click me!</p>` would call the component's `foo` method every time the paragraph was clicked. If you'd like to go through event handling in greater detail, I highly recommend the Vue documentation on event handling.
`v-model` creates a two-way data binding between a form input and a piece of data. Behind the scenes, `v-model` is effectively listening for change events on the form input and updating the data in the Vue component to match.
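That two-way loop is easy to sketch without Vue at all. In this illustrative snippet, `fakeInput` is a hand-rolled stand-in for a DOM input element (so it can run outside a browser); none of these names come from Vue's API:

```javascript
// A fake input standing in for a real DOM node, so the two-way
// binding loop can be shown without a browser.
const fakeInput = {
  value: '',
  listeners: [],
  addEventListener(type, fn) { this.listeners.push(fn); },
  dispatch() { this.listeners.forEach(fn => fn({ target: this })); },
};

const data = { username: '' };

// Direction 1 (what v-model listens for): input event updates the data.
fakeInput.addEventListener('input', e => { data.username = e.target.value; });

// Direction 2 (what a re-render does): the data updates the input's value.
function render() { fakeInput.value = data.username; }

// Simulate the user typing:
fakeInput.value = 'octocat';
fakeInput.dispatch();
console.log(data.username); // 'octocat'

// Simulate the component changing the data programmatically:
data.username = 'vue-fan';
render();
console.log(fakeInput.value); // 'vue-fan'
```

Vue wires up both directions for you whenever you write `v-model` on a form element.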
Taking our template above into consideration, here's how we're using `v-on` and `v-model` to deal with the data in the form:
- `v-model` binds the value of the text input to the `username` property on the component's data, so the two always stay in sync
- `v-on:submit` calls the component's `onSubmit` method when the form is submitted
Now, back in our component's JavaScript, we can declare the `onSubmit` method. Note that the name here is entirely arbitrary (you can choose whatever you'd like), but I like to stick with the convention of naming the function after the event that will trigger it:
```javascript
// src/GithubInput/script.js
export default {
  name: 'GithubInput',
  data() {
    return {
      username: '',
    }
  },
  methods: {
    onSubmit(event) {
      if (this.username && this.username !== '') {
      }
    }
  }
}
```
We can refer to data directly on `this`, so `this.username` will give us the latest value of the text box. If it's not empty, we want to let other components know that the data has changed. For this, we'll use a message bus. These are objects that components can emit events on and use to listen to other events. When your application grows larger you might want to look into a more structured approach, such as Vuex. For now, a message bus does the job.
The great news is that we can use an empty Vue instance as a message bus. To do so, we'll create `src/bus.js`, which simply creates a Vue instance and exports it:
```javascript
// src/bus.js
import Vue from 'vue'

const bus = new Vue()

export default bus
```
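For illustration, here's roughly what the bus gives us, expressed as a minimal plain-JavaScript publish/subscribe object with the same `$on`/`$off`/`$emit` shape. Using the Vue instance means we don't have to write (or maintain) this ourselves:

```javascript
// Minimal pub/sub bus mirroring the $on/$off/$emit API shape.
const bus = {
  handlers: {},
  $on(event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
  },
  $off(event, fn) {
    this.handlers[event] = (this.handlers[event] || []).filter(h => h !== fn);
  },
  $emit(event, payload) {
    (this.handlers[event] || []).forEach(fn => fn(payload));
  },
};

let received = null;
const onName = name => { received = name; };

bus.$on('new-username', onName);
bus.$emit('new-username', 'octocat');
console.log(received); // 'octocat'

bus.$off('new-username', onName);
bus.$emit('new-username', 'other');
console.log(received); // still 'octocat': the handler was removed
```

Any component can import the shared bus, emit events on it, and subscribe to events other components emit.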
In the `GithubInput` component we can then import that module and use it by emitting an event when the username changes:
```javascript
import bus from '../bus'

export default {
  ...,
  methods: {
    onSubmit(event) {
      if (this.username && this.username !== '') {
        bus.$emit('new-username', this.username)
      }
    }
  },
  ...
}
```
With that, our form is done, and we’re ready to start doing something with the resulting data.
The `GithubOutput` component has the same structure as our other two components. In `GithubOutput/script.js` we also import the `bus` module, as we'll need it to know when the username changes. The data that this component will be responsible for will be an object that maps GitHub usernames to the data we got from the GitHub API. This means we won't have to make the request to the API every single time; if we've already fetched the data previously we can simply reuse it. We'll also store the last username we were given, so we know what data to display on screen:
```javascript
// src/GithubOutput/script.js
import bus from '../bus'
import Vue from 'vue'

export default {
  name: 'GithubOutput',
  data() {
    return {
      currentUsername: null,
      githubData: {}
    }
  }
}
```
When the component is created, we want to listen for any `new-username` events that are emitted on the message bus. Thankfully, Vue supports a number of lifecycle hooks, including `created`. Because we're responsible developers, let's also stop listening for events when the component is destroyed by using the `destroyed` event:
```javascript
export default {
  name: 'GithubOutput',
  data: { ... },
  created() {
    bus.$on('new-username', this.onUsernameChange)
  },
  destroyed() {
    bus.$off('new-username', this.onUsernameChange)
  }
}
```
We then define the `onUsernameChange` method, which will be called and will set the `currentUsername` property:
```javascript
methods: {
  onUsernameChange(name) {
    this.currentUsername = name
  }
},
```
Note that we don’t have to explicitly bind the
onUsernameChange method to the current scope. When you define methods on a Vue component, Vue automatically calls
myMethod.bind(this) on them, so they’re always bound to the component. This is one of the reasons why you need to define your component’s methods on the
methods object, so Vue is fully aware of them and can set them up accordingly.
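Here's the pitfall that auto-binding protects us from, shown in plain JavaScript. `component` here is just a plain object standing in for a Vue component:

```javascript
const component = {
  currentUsername: null,
  onUsernameChange(name) {
    this.currentUsername = name;
  },
};

// Passing the method around detaches it from `component`. When it's later
// invoked with some other `this`, the assignment goes astray:
const detached = component.onUsernameChange;
detached.call({}, 'lost'); // `this` is a throwaway object here
console.log(component.currentUsername); // null: the component never saw it

// Binding fixes it. This is effectively what Vue does for every function
// you put in `methods`:
const bound = component.onUsernameChange.bind(component);
bound('octocat');
console.log(component.currentUsername); // 'octocat'
```

Since we hand `this.onUsernameChange` to `bus.$on` as a callback, this binding is exactly what keeps the method working when the bus later invokes it.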
If we don’t have a username — as we won’t when the component is first created — we want to show a message to the user. Vue has a number of conditional rendering techniques, but the easiest is the
v-if directive, which takes a condition and will only render the element if it exists. It also can be paired with
v-else:
```html
<!-- src/GithubOutput/index.vue -->
<template>
  <div>
    <p v-if="currentUsername == null">
      Enter a username above to see their GitHub data
    </p>
    <p v-else>
      Below are the results for {{ currentUsername }}
    </p>
  </div>
</template>

<script src="./script.js"></script>
<style scoped src="./style.css"></style>
```
Once again, this will look very familiar to any Angular developers. We use double equals rather than triple equals here because we want the conditional to be true not only if `currentUsername` is `null` but also if it's `undefined`, and `null == undefined` is `true`.
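A quick demonstration of that loose-equality behaviour:

```javascript
// Loose equality treats null and undefined as equal to each other,
// and to nothing else; strict equality distinguishes them.
console.log(null == undefined);  // true
console.log(null === undefined); // false
console.log(null == 0);          // false, so 0 would not hide the message

let currentUsername; // declared but never assigned, so it's undefined
console.log(currentUsername == null); // true: v-if catches both cases
```

So a single `== null` check covers both the "never set" and the "explicitly cleared" states of `currentUsername`.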
Vue.js doesn’t ship with a built-in HTTP library, and for good reason. These days the
fetch API ships natively in many browsers (although at the time of writing, not IE11, Safari or iOS Safari). For the sake of this tutorial I’m not going to use a polyfill, but you can easily polyfill the API in browsers if you need to. If you don’t like the fetch API there are many third-party libraries for HTTP, and the one mentioned in the Vue docs is Axios.
I’m a big proponent of frameworks like Vue not shipping with HTTP libraries. It keeps the bundle size of the framework down and leaves it to developers to pick the library that works best for them, and easily customize requests as needed to talk to their API. I’ll stick to the fetch API in this article, but feel free to swap it out for one that you prefer.
If you need an introduction to the fetch API, check out Ludovico Fischer’s post on SitePoint, which will get you up to speed.
To make the HTTP request, we'll give the component another method, `fetchGithubData`, that makes a request to the GitHub API and stores the result. It will also first check to see if we already have data for this user, and not make the request if so:
```javascript
methods: {
  ...
  fetchGithubData(name) {
    // if we have data already, don't request again
    if (this.githubData.hasOwnProperty(name)) return

    const url = `https://api.github.com/users/${name}`
    fetch(url)
      .then(r => r.json())
      .then(data => {
        // in here we need to update the githubData object
      })
  }
}
```
We then finally just need to trigger this method when the username changes:
```javascript
methods: {
  onUsernameChange(name) {
    this.currentUsername = name
    this.fetchGithubData(name)
  },
  ...
}
```
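The cache-first logic inside `fetchGithubData` is independent of Vue and can be sketched (and tested) in isolation. In this sketch the network call is injected as a function so a stub can stand in for `fetch`; `makeCachedLookup` and `fakeFetch` are illustrative names, not part of the app:

```javascript
// Cache-first lookup: only invoke `fetcher` for names not seen before.
function makeCachedLookup(fetcher) {
  const cache = {};
  return function lookup(name) {
    if (!Object.prototype.hasOwnProperty.call(cache, name)) {
      cache[name] = fetcher(name);
    }
    return cache[name];
  };
}

// A stub standing in for the real network call, counting invocations.
let calls = 0;
const fakeFetch = name => { calls += 1; return { login: name }; };

const lookup = makeCachedLookup(fakeFetch);
lookup('octocat');
lookup('octocat'); // served from the cache
console.log(calls); // 1
```

Keeping this logic separate from the component would also make it trivial to unit test without rendering anything.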
There’s one other thing to be aware of, due to the way that Vue keeps track of the data you’re working with so that it knows when to update the view. There is a great Reactivity guide which explains it in detail, but essentially Vue isn’t able to magically know when you’ve added or deleted a property from an object, so if we do:
this.githubData[name] = data
Vue won’t recognize that and won’t update our view. Instead, we can use the special
Vue.set method, which explicitly tells Vue that we’ve added a key. The above code would then look like so:
Vue.set(this.githubData, name, data)
This code will modify `this.githubData`, adding the key and value that we pass it. It also notifies Vue of the change so it can rerender.
Now our code looks like so:
```javascript
const url = `https://api.github.com/users/${name}`
fetch(url)
  .then(r => r.json())
  .then(data => {
    Vue.set(this.githubData, name, data)
  })
```
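To see why `Vue.set` is needed at all: Vue 2's reactivity works by walking an object's existing keys and wrapping each in a getter/setter via `Object.defineProperty`, so a key added later is never wrapped. Here's a stripped-down illustration of that mechanism (not Vue's actual implementation):

```javascript
// Wrap each *existing* key of obj in a getter/setter so writes can be
// observed. Keys added after this call are never wrapped.
function makeReactive(obj, onChange) {
  Object.keys(obj).forEach(key => {
    let value = obj[key];
    Object.defineProperty(obj, key, {
      get() { return value; },
      set(next) { value = next; onChange(key); },
    });
  });
}

const notified = [];
const githubData = { octocat: { name: 'The Octocat' } };
makeReactive(githubData, key => notified.push(key));

githubData.octocat = { name: 'Updated' };   // existing key: observed
githubData.newuser = { name: 'Brand new' }; // new key: silently missed

console.log(notified); // [ 'octocat' ]
```

`Vue.set` exists precisely to add the new key *and* wrap it in one step, so changes to it are observed from then on.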
Finally, we need to register the `GithubOutput` component with the `App` component:
```javascript
// src/App/script.js
import GithubInput from '../GithubInput/index.vue'
import GithubOutput from '../GithubOutput/index.vue'

export default {
  name: 'App',
  components: {
    'github-input': GithubInput,
    'github-output': GithubOutput,
  },
  data() {
    return {}
  },
}
```
And include it in the template:
```html
<!-- src/App/index.vue -->
<template>
  <div>
    <github-input></github-input>
    <github-output></github-output>
  </div>
</template>
```
Although we haven’t yet written the view code to show the fetched data on screen, you should be able to fill in the form with your username and then inspect the Vue devtools to see the data requested from GitHub. This shows how useful and powerful these devtools are; you can inspect the local state of any component and see exactly what’s going on.
We can now update the template to show some data. Let's wrap this code in another `v-if` directive so that we only render the data if the request has finished:
```html
<!-- src/GithubOutput/index.vue -->
<p v-if="currentUsername == null">
  Enter a username above to see their GitHub data
</p>
<p v-else>
  Below are the results for {{ currentUsername }}
  <div v-if="githubData[currentUsername]">
    <h4>{{ githubData[currentUsername].name }}</h4>
    <p>{{ githubData[currentUsername].company }}</p>
    <p>Number of repos: {{ githubData[currentUsername].public_repos }}</p>
  </div>
</p>
```
With that, we can now render the GitHub details to the screen, and our app is complete!
There are definitely some improvements we can make. The above bit of HTML that renders the GitHub data only needs a small part of it — the data for the current user. This is the perfect case for another component that we can give a user’s data to and it can render it.
Let’s create a GithubUserData component, following the same structure as with our other components:
mkdir src/GithubUserData
touch src/GithubUserData/{index.vue,script.js,style.css}
There’s only one tiny difference with this component: it’s going to take a property, data, which will be the data for the user. Properties (or “props”) are bits of data that a component will be passed by its parent, and they behave in Vue much like they do in React. In Vue, you have to explicitly declare each property that a component needs, so here I’ll say that our component will take one prop, data:
// src/GithubUserData/script.js
export default {
  name: 'GithubUserData',
  props: ['data'],
  data() {
    return {}
  }
}
One thing I really like about Vue is how explicit you have to be; all properties, data, and components that a component will use are explicitly declared. This makes the code much nicer to work with and, I imagine, much easier as projects get bigger and more complex.
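As a side note, this explicitness can be taken further: Vue also accepts an object form for props that adds runtime validation. A hypothetical stricter declaration of our data prop might look like this (a sketch, not code from the app above):

```javascript
// Object form of a Vue prop declaration: Vue validates the prop at
// runtime and warns in the console when the checks fail.
const props = {
  data: {
    type: Object,     // warn if the parent binds a non-object
    required: true,   // warn if the prop is omitted entirely
  },
}

// In src/GithubUserData/script.js this would replace `props: ['data']`:
// export default { name: 'GithubUserData', props, data() { return {} } }
```

The array shorthand is fine for small components; the object form pays off once a component's props grow and you want the misuse warnings.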
In the new template, we have exactly the same HTML as before, although we can refer to data rather than githubData[currentUsername]:
<!-- src/GithubUserData/index.vue -->
<template>
  <div v-if="data">
    <h4>{{ data.name }}</h4>
    <p>{{ data.company }}</p>
    <p>Number of repos: {{ data.public_repos }}</p>
  </div>
</template>

<script src="./script.js"></script>
<style scoped src="./style.css"></style>
To use this component we need to update the GithubOutput component. Firstly, we import and register GithubUserData:
// src/GithubOutput/script.js
import bus from '../bus'
import Vue from 'vue'
import GithubUserData from '../GithubUserData/index.vue'

export default {
  name: 'GithubOutput',
  components: {
    'github-user-data': GithubUserData,
  },
  ...
}
You can use any name for the component when declaring it, so where I’ve placed github-user-data, you could place anything you want. However, it’s advisable to stick to component names that contain a dash. Vue doesn’t enforce this, but the W3C specification on custom elements states that they must contain a dash to prevent naming collisions with elements added in future versions of HTML.
Once we’ve declared the component, we can use it in our template:
<!-- src/GithubOutput/index.vue -->
<p v-else>
  Below are the results for {{ currentUsername }}:
  <github-user-data :data="githubData[currentUsername]"></github-user-data>
</p>
The crucial part here is how I pass the data property down to the component:
:data="githubData[currentUsername]"
The colon at the start of that attribute is crucial; it tells Vue that the attribute we’re passing down is dynamic and that the component should be updated every time the data changes. Vue will evaluate the value of githubData[currentUsername] and ensure that the GithubUserData component is kept up to date as the data changes.
If you find :data a bit short and magical, you can also use the longer v-bind syntax:
v-bind:data="githubData[currentUsername]"
The two are equivalent, so use whichever you prefer.
With that, our GitHub application is in a pretty good state! You can find all the code on GitHub and even check out the application running online.
I had high hopes when getting started with Vue, as I’d heard only good things, and I’m happy to say it really met my expectations. Working with Vue feels like taking the best parts of React and merging them with the best parts of Angular. Some of the directives (like v-if, v-else, v-model and so on) are really easy to get started with (and easier to immediately understand than doing conditionals in React’s JSX syntax), but Vue’s component system feels very similar to React’s.
You’re encouraged to break your system down into small components, and all in all I found it a very seamless experience. I also can’t commend the Vue team highly enough for their documentation: it’s absolutely brilliant. The guides are excellent, and the API reference is thorough yet easy to navigate to find exactly what you’re after.
If you’ve enjoyed this post and would like to learn more, the best place to start is definitely the official Vue.js site.
Thanks for reading ❤
Red Hat Bugzilla – Bug 206955
Can't create more than 256 connections to X display
Last modified: 2007-11-30 17:07:27 EST
xorg-x11-libs-6.8.2-1.EL.13.25
After opening loads of xterms, or other X clients, we'd get the error:
Xlib: connection to ":0.0" refused by server
Xlib: Maximum number of clients reached
xterm Xt error: Can't open display: sunultra20b:0.0
One way to reproduce is to run:
for i in `seq 1 255`; do xlogo & done
and watch instances from roughly the 240th onward fail to connect to the display.
bug 176328 for RHEL3 isn't related, as it was about XOpenDisplay failing when
the first fd would be >= 256.
It isn't related to upstream:
as the electricsheep screensaver isn't being used.
Created attachment 136545 [details]
test.c
XIDs are the global namespace for objects in the X protocol. Clients are
assigned a range when they connect, and subsequent allocations happen from that
pool.
The XID itself is allocated as follows:
3 bits reserved (for no good reason)
N bits of client ID
M bits of resource ID
where N is defined by the MAXCLIENTS define in the server source. Which means right now, we have 8 bits for client ID and 21 for resource ID. This patch would move one bit to the client ID space, which has the side effect of reducing the number of resources per client from 2 million-ish to 1 million-ish. This is probably not a problem. However, we may want to add an option somewhere to allow the user to switch this at server startup.
Built as xorg-x11 6.8.2-1.EL.25. MODIFIED.
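The XID layout described in the comment above can be sketched in a few lines (illustrative constants and a made-up helper, not the actual server macros):

```c
#include <assert.h>

/* 32-bit XID = 3 reserved bits | client-ID bits | resource-ID bits */
#define RESERVED_BITS 3
#define CLIENT_BITS   8                                   /* 2^8 = 256 clients   */
#define RESOURCE_BITS (32 - RESERVED_BITS - CLIENT_BITS)  /* 21 -> ~2M resources */

/* Pack a client ID and a per-client resource ID into one XID. */
unsigned int make_xid(unsigned int client, unsigned int resource)
{
    assert(client   < (1u << CLIENT_BITS));
    assert(resource < (1u << RESOURCE_BITS));
    return (client << RESOURCE_BITS) | resource;
}
```

Moving one bit from RESOURCE_BITS to CLIENT_BITS doubles the client limit to 512 while halving the per-client resource space, which is exactly the trade-off discussed in the patch comment.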
Not fixed. Used the shell loop:
for a in `seq 1 256`; do xlogo & done
hit the limit at 253 (I had some clients open already). Starting a new one explicitly says the limit was hit.
*** This bug has been marked as a duplicate of 230217 *** | https://bugzilla.redhat.com/show_bug.cgi?id=206955 | CC-MAIN-2016-40 | refinedweb | 329 | 71.44 |
Prefect Cloud: Deploying a Flow
Now that you understand the basic building blocks of Prefect Cloud and have locally authenticated with the Cloud API, let's walk through a real flow deployment. Before we begin, make sure:
- you have a working Python 3.5.2+ environment with the latest version of Prefect installed
- you have Docker up and running, and have authenticated with a Docker registry that you have push access to
Write a Flow
The first step in flow deployment is obviously to write a flow! We've prepared the following parametrized flow for you to edit as you wish. Note that it relies on the pyfiglet Python package, which is not included in a standard install of Prefect.
import pyfiglet
import prefect
from prefect import task, Flow, Parameter

@task(name="Say Hi")
def say_hello(name):
    ascii_name = pyfiglet.figlet_format("Hi {}!".format(name))
    task_logger = prefect.context['logger']
    for line in ascii_name.split("\n"):
        task_logger.info(line)

with Flow("Greetings Flow") as flow:
    name = Parameter("name")
    say_hello(name)
You can now run this flow locally by passing the name parameter directly to flow.run or via the parameters keyword argument:
flow.run(name="Marvin")

# or
flow.run(parameters={"name": "Marvin"})
Dockerize your Flow
In order to deploy this flow to Prefect Cloud, we need to create a Docker image containing the flow. There are two ways of doing this:
Create a Docker storage object
The most explicit way of configuring a Docker container for Prefect is to instantiate a Docker storage object:
from prefect.environments.storage import Docker

storage = Docker(
    base_image="python:3.6",
    python_dependencies=["pyfiglet"],
    registry_url="prefecthq",
    image_name="flows",
    image_tag="first-flow",
)
There are other configuration settings - see the API reference documentation for additional information. Let's review the keyword arguments we have set above:
- base_image: the base Docker image to build on top of; if you don't provide one, Prefect will auto-detect properties of your environment and choose a sane default. Configuring a base image that contains your Flow's dependencies (both Python and non-Python) is a popular way of sharing configuration amongst your team.
- python_dependencies: when Prefect builds your Flow's image, all of these packages will be pip installed into the image. Note that if you require non-PyPI dependencies, you should choose a base image containing them instead of providing them here. We will make sure prefect is always installed, so this keyword is reserved for non-prefect dependencies such as pyfiglet in our example.
- registry_url, image_name and image_tag: these options configure where the image will be pushed to. Note that different registries require different configurations here. For example, the code snippet above is configured for Prefect's DockerHub registry. If we were instead pushing to Google Cloud Registry, we might have provided registry_url="gcr.io/my-teams-registry/flows" and let Prefect autogenerate an image name and tag.
You can keep your images local
If no registry_url is provided, your Flow's image will not be pushed anywhere and can only be run through local agents on the same machine.
Attaching storage objects to your flows can be done at Flow initialization or by directly setting the attribute:
with Flow("Greetings Flow", storage=storage) as flow:
    name = Parameter("name")
    say_hello(name)

# or
flow.storage = storage
Provide configuration settings at deploy time
Alternatively, Prefect simplifies this interface by allowing you to provide all Docker storage initialization keyword arguments at deploy time via flow.deploy:
## this accomplishes the exact same thing as our example above:
flow.deploy(
    "My Project",
    base_image="python:3.6",
    python_dependencies=["pyfiglet"],
    registry_url="prefecthq",
    image_name="flows",
    image_tag="first-flow",
)
Deploy your Flow
Now that we have our flow built, all that's left is to deploy it using flow.deploy! Whenever you deploy a flow, you always need to provide a project name for a pre-existing Cloud project. In this case, assuming we have created a Docker storage object explicitly, we can simply call:
flow.deploy("My Project")
and watch Prefect build and push our Docker image, followed by sending the appropriate metadata to Prefect Cloud. This method will return the Cloud ID for this flow, which is useful information when interacting with Cloud's GraphQL API.
Only metadata is sent to Cloud
Note that your Flow code is stored in your Docker image alone. This gives you full control over permissioning and access for your Flows. Whenever this Flow is picked up by a Prefect Agent, that agent will also need pull access from the registry in which your flow's image lives.
Run your Flow using an Agent
"Deploying" a Prefect Flow to Cloud is essentially registering it with Cloud. If we had included a Prefect Schedule on our Flow, the Prefect Scheduler would immediately begin creating scheduled runs for execution (this can be avoided by setting
set_schedule_active=False in flow.deploy). Because we did not provide a schedule, our flow will only run when we create a flow run for it.
There are numerous ways to create a flow run:
- via GraphQL
- via the Prefect CLI
- via the UI
- via the Prefect Client
All of these are described in the corresponding Cloud concept documentation. Note that the name parameter is always required on our Flow, so any attempt at creating a flow run must provide a value for this parameter.
Once a flow run has been created in a Scheduled state, all active Agents will now see it and only one will be able to submit it for execution. For reference material on running Prefect Agents, see the Up and Running documentation and the Agent overview documentation.
Hi
I was trying to write a program that reads roll numbers and marks, and counts pass, fail, A grade, B grade, etc. Do you find the code okay? As far as I could confirm, it was working fine. I wanted your advice on those bold red braces, which I use for clarification to show that the statements within them are related. Can those red braces affect the overall working? Please let me know. Thanks
Regards
Jackson
Code:
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    int rollno = 0, pass = 0, fail = 0, Agrade = 0, Bgrade = 0, Cgrade = 0,
        Dgrade = 0, Egrade = 0, Fgrade = 0;
    float TM = 0, OM = 0;

    cout << "Enter total marks: ";
    cin >> TM;

    do {
        cout << "Enter Roll No. : ";
        cin >> rollno;
        cout << "Enter obtained marks: ";
        cin >> OM;

        {
            if (OM >= 0.5 * TM) {
                pass++;
            } else {
                fail++;
            }
        }

        {
            if (OM >= 0.8 * TM) {
                Agrade++;
            } else if (OM >= 0.7 * TM) {
                Bgrade++;
            } else if (OM >= 0.6 * TM) {
                Cgrade++;
            } else if (OM >= 0.5 * TM) {
                Dgrade++;
            } else if (OM >= 0.4 * TM) {
                Egrade++;
            } else {
                Fgrade++;
            }
        }
    } while (rollno > 0);

    cout << "Total pass is: " << pass << endl;
    cout << "Total fail is: " << fail << endl;
    cout << "Total A grades are: " << Agrade << endl;
    cout << "Total B grades are: " << Bgrade << endl;
    cout << "Total C grades are: " << Cgrade << endl;
    cout << "Total D grades are: " << Dgrade << endl;
    cout << "Total E grades are: " << Egrade << endl;
    cout << "Total F grades are: " << Fgrade << endl;
    system("pause");
}
This tutorial explains
- GuiXt Overview
- Installation
- Script Commands
- System variables
- The GuiXt screen
- About pushbuttons
- GuiXT Example
- The modified screen
1. GuiXT Overview
GuiXT gives you a more flexible way to redesign screens than transaction and screen variants. GuiXT is based on a simple scripting language. From version 4.x, GuiXT is included with SAPgui free of charge. For earlier versions, you have to buy it from Synactive.
Uses of SAP GuiXt:
- Move or delete fields, table columns and pushbuttons
- Modify field texts, headings and pushbutton labels
- Add texts, tips and group boxes
- Display images
- Simultaneously modify texts on all R/3 screens
- Add your own pushbutton functions
- Set default values
- Offer radiobuttons instead of coded input
- Restrict input to capital letters or numbers
- Change the length of input fields
- Display of data dependent images (e.g. product images)
- Start PC applications by clicking on an image in list displays
2. GuiXt Installation
GuiXT must be activated when SAPgui is started. To activate it automatically whenever SAPgui is started, press the Customizing of local layout button (Alt + F12) on the SAP main screen and tick Activate GuiXT.
The documentation for GuiXt can be downloaded from Synactive at
3. GuiXt Script Commands
Please refer to the documentation from Synactive for examples of how to use the commands.
Some important SAP GuiXt Script Commands are:
del Deletes a screen element
pos Shifts a screen element to new position
box Draws a box with or without title
boxsize Changes the size of a box
buttonsize Changes the size of a pushbutton
columnheader Changes a column header
columnorder Changes the column order
columnsize Changes the column width
columnwidth Changes the column width (visualization only)
comment Includes a comment text
compress Eliminates empty lines
default Assigns a default value for an input field
fieldsize Reduces the size of an input or output field
globaltextreplace Replaces a text string in all R/3 screens
icon Replaces R/3 icon by another specified one
if/else/endif Conditional scripting
image Displays an image (.bmp, .gif or .jpg-format)
include Includes a script file
listimage Displays an image in a list (.bmp, .gif or .jpg-format)
mark Marks an entry field
message Shows a message
nodropdownlist Changes a drop down box into a normal entry field
noInput Cancels a field's possibility for input
noleadingzeros Suppresses leading zeros
numerical Numerical input only
offset Relative positioning
pushbutton Inserts a new pushbutton
radiobutton Inserts a radiobutton
stop Stops interpretation of a script
tablewidth Changes the width of a table display
text Changes field texts and adds new texts
textreplace Replaces a text string
tip Adds a quickinfo (tooltip)
title Changes the R/3 screen title
titleprefix Changes the R/3 screen title
uppercase Uppercase input only
versionnumber Sets a version number (script server)
view Displays html and rtf files; connects URLs with R/3 input (only available with GuiXT Viewer)
windowsize Resizes a popup screen
4. System variables
GuiXT also provides system variables such as the user name, transaction code, text of the last warning, etc. Please refer to the documentation for a complete list of system variables.
5. The GuiXt screen
When GuiXT is started, the GuiXT screen is activated (if you have not chosen to hide it in the GuiXT options). The GuiXT screen is a very helpful tool when you are developing a GuiXT script.
For example, you can use the GuiXT screen to:
View your script and error messages generated by the script ( Menu View -> Script)
View the screen elements and their position ( Menu View -> Screen Elements )
View a trace with error messages. Remember to activate trace in the GuiXT options ( Menu View -> Trace )
Tip: When you are in transaction SHD0, press F8 or the Test button to view the result of your script
6. About pushbuttons
The example below (part 7) shows how to add and delete pushbuttons.
You find the internal "FCODE" by choosing the desired function in the transaction menu and press F1 while the mouse cursor points to this function. Now the R/3 system displays the internal function code in a pop-up window. The FCODE parameter can also be a transaction code
e.g. VD03.
Add a pushbutton to the toolbar
Syntax: Pushbutton (Toolbar) "Pushbutton text" "FCode"
Example: Pushbutton (Toolbar) "Change salesorder" "AEN"
This button will send you to the Change salesorder screen.
Add a pushbutton to the screen
Syntax: Pushbutton (row,column) "Pushbutton text" "FCode"
Example: pushbutton (15,20) "Goto Customer initial screen" "/nVD03"
Delete a pushbutton
Syntax: del [Pushbutton text]
Example: del "Create with reference"
7. Example
This example demonstrates how to modify transaction VA01 Create salesorder: Initial screen
Modifications:
Change the title to Modified Create Salesorder screen
In the top of the screen show the User name and Transaction code as a comment
How to add a box
Move the Order type field (see Tip) and the Organizational data fields down and to the right
Set default order type to OR
Hide the Division, Sales office and Sales group fields
Add a Change order button to the toolbar
Add a "Goto Customer initial screen" button on the screen which calls transaction VD03.
Remove the buttons Create with reference, Sales, Item overview and Ordering party from the toolbar.
Tip: If you want to move the Order type text behind the Order type field, you must use the option -Triple
Example: pos F[Order type] (5,11) -Triple
Step 1 - Create a screen variant
The first step is to create a screen variant for VA01. See also Transaction variants / Screen variants
Go to transaction SHD0 and enter VA01 in the Transaction field. Press the Create button.
Now transaction VA01 is shown. Select OR as order type and press Enter. Do not enter anything in the other fields, because we want to make all modifications in the GuiXT script.
The Confirm entries screen is shown. Enter ZVA01 in the Name of screen variant field and press Exit and save.
Now the Field values screen is shown. Press the GuiXt Script button to enter the script.
Step 2 - Enter the script
Enter the following script:
title "Modified Create Salesorder screen"
box (1,1) (3,50)
comment (2,2) "User:"
comment (2,11) &[_user]
comment (2,26) "Transaction:"
comment (2,41) &[_transaction]
pos F[Order type] (5,11) -Triple
pos G[Organizational data] (7,10)
Image (1,100) "D:\DATA\DOKUMENTER\MY PICTURES\IMAGE-000.JPG"
Default [Order type] "OR"
del [Division]
del [Sales office]
del [Sales group]
pushbutton (Toolbar) "Change order" "AEND"
del [Create with reference]
del [Sales]
del [Item overview]
del [Ordering party]
pushbutton (15,20) "Goto Customer initial screen" "/nVD03"
Step 3 - Create a variant transaction code to display the modified screen
Goto transaction SHD0
Select menu Goto->Create variant transaction
Transaction code is the name of the new transaction you want to create e.g. ZVA01A
Transaction is the original transaction VA01
Transaction variant is the name of the screen variant you have created, ZVA01
Note: You can also create the variant transaction from SE93 Maintain Transaction | https://www.stechies.com/guixt/ | CC-MAIN-2022-21 | refinedweb | 1,159 | 52.83 |
Four file properties are available in Solution Explorer: FileName, BuildAction, CustomTool, and CustomToolNamespace.
The BuildAction, CustomTool, and CustomToolNamespace properties are provided for advanced scenarios. The default values are typically sufficient and do not need to be changed.
You can rename a file by clicking the FileName property in the Properties window and typing in the new name. Notice that if you change the file's name, Visual Studio will automatically rename any .vb or .resx files that are associated with it.
The BuildAction property indicates what Visual Studio does with a file when a build is executed. The default value for BuildAction depends on the extension of the file you add to the solution. For example, if you add a Visual Basic file to Solution Explorer, the default value for BuildAction is Compile. Resources in a .resx file are accessed via the strongly-typed class auto-generated for the .resx file. Therefore, you should not change this setting to Embedded Resource, because doing so would include the image twice in the assembly.
For more information on how to access resource files (compiled from .resx files) at run time, see ResourceManager Class. The CopyToOutputDirectory property determines whether the file is copied to the output directory when the project is built: choose Copy always if the file is always to be copied to the output directory, or Copy if newer to copy it only when it has changed. In most cases, the CustomTool property solely allows you to see which custom tool is applied to a file. In rare circumstances, you might need to change the value of this property. The value of this property must either be blank or one of the built-in custom tools.
To set or change the custom tool, click the CustomTool property in the Properties window and type in the name of a custom tool.
If you have a custom tool assigned to your project, the CustomToolNamespace property allows you to specify the namespace you want to assign to code generated by the custom tool. When you specify a value for the CustomToolNamespace property, code generated by the tool is placed in the specified namespace. If the property is empty, generated code is placed in the default namespace for the folder in which the converted file lives; for Visual Basic, this is the project's root namespace, and for Visual C# this corresponds to the setting of the DefaultNamespace property for the folder. | http://msdn.microsoft.com/en-us/library/0c6xyb66(VS.80).aspx | crawl-002 | refinedweb | 362 | 52.39 |
You can also use the application (through the Java Scripting API) to run other scripts that you've written. For instance, the script shown within the ScriptPad window in Figure 1 accesses Java's static System object, invokes its method (getProperties), uses the resulting Properties object to get the name of the host operating system, and outputs the result in an alert window.
When you execute this script on a Windows XP machine, you get the output shown in Figure 2.
This is only a simple example of what you can do when you combine scripting and Java. However, it does illustrate how seamless the integration really is. You can execute any script entered in ScriptPad with the Tools..Run menu option.
The following Java code loads the Rhino JavaScript engine, loads the script contained within the file browse.js, and executes the function named browse (This script was adapted from a sample that is distributed with Java SE 6):
import java.util.*;
import java.io.*;
import javax.script.*;

public class Main
{
    public Main()
    {
        try {
            ScriptEngineManager m = new ScriptEngineManager();
            ScriptEngine engine = m.getEngineByName("javascript");
            if ( engine != null )
            {
                InputStream is = this.getClass().getResourceAsStream("browse.js");
                Reader reader = new InputStreamReader(is);
                engine.eval(reader);
                Invocable invocableEngine = (Invocable)engine;
                invocableEngine.invokeFunction("browse");
            }
        }
        catch ( Exception e ) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args)
    {
        Main m = new Main();
    }
}
The browse.js script itself (see Listing 1) uses the new desktop features of Java SE 6 to open the default browser on the host OS to the page.
This paradoxical example where Java invokes a script that in turn invokes Java to load an HTML page demonstrates just how dynamic the scripting API can be.
How to check if a user is logged in (how to properly use user.is_authenticated)?
Update for Django 1.10+:
is_authenticated is now an attribute in Django 1.10.
The method was removed in Django 2.0.
For Django 1.9 and older:
is_authenticated is a function. You should call it like
if request.user.is_authenticated():
    # do something if the user is authenticated
As Peter Rowell pointed out, what may be tripping you up is that in the default Django template language, you don't tack on parenthesis to call functions. So you may have seen something like this in template code:
{% if user.is_authenticated %}
However, in Python code, it is indeed a method in the
User class.
Note that for Django 1.10 and 1.11, the value of the property is a CallableBool rather than a plain boolean, which can cause some subtle bugs.
Following block should work:
{% if user.is_authenticated %} <p>Welcome {{ user.username }} !!!</p> {% endif %} | https://codehunter.cc/a/python/how-to-check-if-a-user-is-logged-in-how-to-properly-use-user-is-authenticated | CC-MAIN-2022-21 | refinedweb | 147 | 70.29 |
On Sun, Nov 19, 2000 at 01:11:33AM +0000, Alan Cox wrote:
> Anything which isnt a strict bug fix or previously agreed is now 2.2.19
> material.

I needed to add this to get my kernel to compile. I was trying to get
pci_resource_start to be defined. It was only an issue with this one
object file, so this may or may not be the right place.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

--- ./drivers/scsi/megaraid.c.OLD	Tue Nov 21 07:04:57 2000
+++ ./drivers/scsi/megaraid.c	Tue Nov 21 20:16:08 2000
@@ -248,6 +248,8 @@
 #include <asm/uaccess.h>
 #endif
 
+#include <linux/kcomp.h>
+
 #include "sd.h"
 #include "scsi.h"
 #include "hosts.h"

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at
On 12/10/06, Francis Wright <address@hidden> wrote:
I execute all the following commands from a cmd command prompt (outside of Emacs). emacsclient -n -a runemacs "TO DO.txt" works correctly if Emacs IS already running, but if it is not already running then Emacs does not see the filename correctly; the quotes do not appear to be passed on to runemacs.
The following patch addresses the issue by allocating quoted copies of any argument containing spaces before calling execvp. Any objections to install this fix? As it stands, it affects also non-Windows builds. Is requoting args the right behavior in these environments?

/L/e/k/t/u

Index: lib-src/emacsclient.c
===================================================================
RCS file: /cvsroot/emacs/emacs/lib-src/emacsclient.c,v
retrieving revision 1.98
diff -u -2 -r1.98 emacsclient.c
--- lib-src/emacsclient.c	30 Nov 2006 22:49:38 -0000	1.98
+++ lib-src/emacsclient.c	15 Dec 2006 10:19:44 -0000
@@ -310,8 +310,20 @@
   if (alternate_editor)
     {
-      int i = optind - 1;
+      int j, i = optind - 1;
+
 #ifdef WINDOWSNT
-      argv[i] = (char *)alternate_editor;
+      argv[i] = (char *) alternate_editor;
 #endif
+
+      /* Arguments with spaces have been dequoted, so we
+         have to requote them before calling execvp.  */
+      for (j = i; argv[j]; j++)
+        if (strchr (argv[j], ' '))
+          {
+            char *quoted = alloca (strlen (argv[j]) + 3);
+            sprintf (quoted, "\"%s\"", argv[j]);
+            argv[j] = quoted;
+          }
+
       execvp (alternate_editor, argv + i);
       message (TRUE, "%s: error executing alternate editor \"%s\"\n",
User selection and parameter for SSE Plugin
Virilo Tejedor, Jan 17, 2018 8:51 AM
Hi,
I have translated the Kmeans example to Python, using HelloWorld code as reference. I have this expression working:
IrisPython.PredictIris([petal length], [petal width], [sepal length], [sepal width])
Now, I'd like to pass the parameter n_cluster to the python SSE plugin.
If I do the next: IrisPython.PredictIris(3, [petal length], [petal width], [sepal length], [sepal width])
The value "3" is going to be replicated on each row. It would be very inefficient using some parameters like that for a large dataset.
Also, I'd like to let the user choose the n_cluster value.
Perhaps, it could be possible using a new data field:
LOAD * Inline
[NUM_CLUSTER
1
2
3
4
5
6
7
...
];
Could the SSE plugin know the user selection?
How could I pass parameters in addition to data to a python script?
Thanks in advance,
Virilo
Re: User selection and parameter for SSE Plugin
Josefine Stål, Jan 18, 2018 4:59 AM (in response to Virilo Tejedor)
Hi virilo.tejedor,
We are aware of the limitation of not being able to send constants as parameters, without it being replicated on each row as you said, and it's in our backlog. As of today the data passed as parameters to the plugin must have the same cardinality. When you pass a field to the plugin, Qlik will send the selected values in that field, to the plugin. If no selection was made, all values will be sent.
You can create a field for the number of clusters and let the user make selections, but then you need to handle the fact that the user might choose more than one value. I would recommend you to use a variable instead. If you're using a plugin defined function, like the PredictIris function, you have to pass the variable as a parameter. But if you are using a script function you can use string concatenation to include the variable directly in the script without having to pass it as a separate parameter.
Re: User selection and parameter for SSE Plugin
Tobias Lindulf, Jan 18, 2018 6:34 AM (in response to Virilo Tejedor)
Josefine described it well.
Just to give you an example of using a variable as constant when calling scripteval:
If myvar is my variable, then you can pass it in the script string as $(myvar), see below example:
Script.ScriptEval('list(numpy.asarray(args[0]) + $(myvar))', Numeric)
There also exists another way, still a bit hard for the one writing the expressions in Qlik but it is possible at least until the needed functionality is added in Qlik. You can expose your own Iris-methods on the python side so that they are accessible from scripteval calls. In that case you can call your iris-methods through a scripteval call like below example shows where I call my method helloworld:
Script.ScriptEvalStr('myfuncs.helloworld(str($(myvar)), args[0])', String)
However you need to change the python plugin to expose your methods. In my example I have modified the script example (ScriptEval_script.py) by first adding a new class that contains my own method like below:
class MyFunctions:
@staticmethod
def helloworld(conststring, mystrings):
return iter([str.join(conststring) for str in mystrings])
Then I add the following line in evaluate method:
funcs = MyFunctions()
and finally I pass that class in the eval call next to the other exposed classes:
result = eval(script, {'args': params, 'numpy': numpy, 'myfuncs': funcs})
I know, it is not nice, but could be worth trying.
Re: User selection and parameter for SSE Plugin
Virilo Tejedor, Jan 19, 2018 8:02 AM (in response to Tobias Lindulf)
Thanks Josefine, Tobias for your responses.
I tried this simple example:
=Script.ScriptEval('list(numpy.asarray(args[0]) + 5)', [petal width])
But I'm receiving a nan as arg[0]. I added some extra traces:
2018-01-19 13:29:16,296 - INFO - Logging enabled
2018-01-19 13:29:16,328 - INFO - *** Running server in insecure mode on port: 50600 ***
2018-01-19 13:29:45,870 - INFO - EvaluateScript: list(numpy.asarray(args[0]) + 5) (ArgType.Numeric ReturnType.Numeric) FunctionType.Tensor
header.params !
Evaluate script row wise
call to evaluate
----------------
script: list(numpy.asarray(args[0]) + 5)
params: [nan]
ret_type: ReturnType.Numeric
2018-01-19 13:29:45,920 - ERROR - Exception iterating responses: 'numpy.float64' object is not iterable
Traceback (most recent call last):
File "C:\Users\virilo.tejedor\AppData\Local\Continuum\Anaconda3\lib\site-packages\grpc\_server.py", line 393, in _take_response_from_response_iterator
return next(response_iterator), True
File "C:\POC\src\SSE_Plugins-0.1\FullScriptSupport\ScriptEval_script.py", line 59, in EvaluateScript
yield self.evaluate(header.script, ret_type, params=params)
File "C:\POC\src\SSE_Plugins-0.1\FullScriptSupport\ScriptEval_script.py", line 177, in evaluate
result = eval(script, {'args': params, 'numpy': numpy})
File "<string>", line 1, in <module>
TypeError: 'numpy.float64' object is not iterable
Why am I receiving a NaN?
Also, it seems like doing it in this way is going to perform an evaluate execution per row (Evaluate script row wise)
It won't allow some use cases, like retraining the model using the selected data or performing a moving average
As workaround I'm exposing another funtion SetUserParam to send the user variables to python; and qsVariable for the selectors in the UI.
This workaround has some issues:
- race conditions with other requests
- the selectors aren't refreshing the graphs, because Qlik doesn't know that Y hat is modified in the SetUserParam call
Since it is a proof of concept for using Advanced Analytics in Qlik, I could wait for future versions without this limitation.
Thanks again!
Re: User selection and parameter for SSE Plugin
Josefine Stål, Jan 19, 2018 8:59 AM (in response to Virilo Tejedor)
Hi Virilio!
It's hard for me to say exactly why you receive a NaN without knowing how your data model looks like and what plugin you're using. If you could provide the .qvf file I could take a look. Did you use the FullScriptSupport example when you tried to run `=Script.ScriptEval('list(numpy.asarray(args[0]) + 5)', [petal width])`?
We released a new SSE version yesterday (v1.1.0) where we also updated the python examples, one of the updates being to evaluate the script after all data is collected(and not per row as you noticed), in the script example. There is also a new python script example using pandas and exec, which is better suitable for more complex scripts. Read more about the pandas example here. Note that the new features in the SSE protocol v1.1.0 are supported first in Sense February 2018.
Re: User selection and parameter for SSE Plugin
Josefine Stål, Jan 19, 2018 9:42 AM (in response to Virilo Tejedor)
Just a quick update:
I tried using the latest version of FullScriptSupport and the Ctrl+00 script, and the following expression worked fine:
=Script.ScriptEval('list(numpy.asarray(args[0]) + 5)', AsciiNum)
If you are using the provided examples, I would recommend you to update to the latest version and try again, to see if you still have the same issue.
Let me know how it goes! | https://community.qlik.com/thread/288131 | CC-MAIN-2018-43 | refinedweb | 1,205 | 53.1 |
squarespace 0.0.2
Library to access the Squarespace Commerce API.

# squarespace-python
Python module to access the Squarespace Commerce API.
This library provides pythonic access to the Squarespace Commerce API that
can be found here:
At the time of this writing this API is in private beta and therefore
your store may not have the ability to generate API keys.
## Usage
from squarespace import Squarespace
store = Squarespace('<my_api_key_that_i_generated>')
for order in store.orders(): # Iterate through 20 orders
print(order['order_number'])
for order in store.next_page(): # Iterate through another 20 orders
print(order['order_number'])
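The two documented calls suggest a simple pagination loop. The sketch below assumes next_page() returns an empty list once the orders are exhausted (a guess, since the README doesn't say), and uses a stub client in place of the real Squarespace class so it runs without an API key:

```python
def all_order_numbers(store):
    numbers = []
    page = store.orders()            # first batch of orders
    while page:
        numbers.extend(o['order_number'] for o in page)
        page = store.next_page()     # assumed empty when exhausted
    return numbers

class StubStore:
    """Stands in for squarespace.Squarespace in this sketch (not the real client)."""
    def __init__(self, pages):
        self._pages = list(pages)
    def orders(self):
        return self._pages.pop(0) if self._pages else []
    def next_page(self):
        return self._pages.pop(0) if self._pages else []

store = StubStore([[{'order_number': 1}, {'order_number': 2}],
                   [{'order_number': 3}]])
print(all_order_numbers(store))  # [1, 2, 3]
```

With the real client, replace StubStore with Squarespace('<my_api_key>') and the loop is unchanged.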
## Installation
`$ pip install squarespace`
## Disclaimer
This project's author is not employed by or affiliated with Squarespace
beyond being a happy customer who wanted to write his own shipping
integration. Use of this software is entirely at your own risk. Neither
Zach White, Clueboard, nor Squarespace can be held responsible for how
you employ this module.
Authors:
Zach White <skullydazed@gmail.com>
- Author: skullY
- License: MIT
- Package Index Owner: zwhite
- DOAP record: squarespace-0.0.2.xml | https://pypi.python.org/pypi/squarespace/0.0.2 | CC-MAIN-2018-13 | refinedweb | 174 | 55.54 |
[k8s] Cannot remove model and cluster after trying to manage a cluster with a non-reachable controller
Bug #1828870 reported by Peter Matulis on 2019-05-13
This bug affects 1 person
Bug Description
I tried, perhaps foolishly, to manage a remote GKE cluster with a local LXD controller. Charm deployment failed and I realised my mistake (i.e. the cluster nodes cannot initiate TCP connections with the controller). The real problem is that I cannot remove the model, even with the new `--force` option. I then could not remove the cluster from the controller due to the existing model. I had to wipe out the entire controller, which is very unfortunate.
https:/
Ian Booth (wallyworld) on 2019-05-29
Canonical Juju QA Bot (juju-qa-bot) on 2019-06-04
A K8s namespace gets left behind in the cluster. This means the model name cannot be reused unless the operator knows how to remove the namespace:
$ kubectl delete namespace turbo | https://bugs.launchpad.net/juju/+bug/1828870 | CC-MAIN-2021-25 | refinedweb | 161 | 60.75 |
On Tue, Apr 08, 2003 at 04:03:36PM +0200, Nicola Ken Barozzi wrote:
>
> Jeff Turner wrote, On 08/04/2003 15.33:
> >On Tue, Apr 08, 2003 at 02:34:35PM +0200, Nicola Ken Barozzi wrote:
> >
> >>Jeff Turner wrote, On 08/04/2003 14.11:
> >>...
> >>
> >>>I think having a @tab attribute would solve this problem. We could say
> >>>that @tab works like namespaces: inherit from the parent, unless defined.
> >>>
> >>><menu tab="Foo">
> >>> ...
> >>> <!-- These menu items all have @tab="Foo" -->
> >>><menu tab="Bar">
> >>> ...
> >>> <!-- @tab="Bar" -->
> >>> <menu href="special/index.html" tab="Foo">
> >>> <!-- This entry has overridden @tab="Foo" -->
> >>>
> >>.
> >
> >
> >Okay, think of it as giving each page a classification (@class instead of
> >@tab). Skins may choose to indicate a page's classification however they
> >want.
>
> Yes, I reckoned this, but the classification is, as Stefano said and as
> I reckon, the same as the hierarchy. Putting tabs that are nested in the
> hierarchy is not something that should be done IMHO.
Oh I see. So using top-level nodes as tabs incidentally enforces this
no-nested-tabs principle. I don't mind, but the problem is then, how
does one indicate that a top-level node *isn't* also a tab? Look at
Forrest's site.xml; do we really want tabs for 'about',
'getting-involved', 'documentation', samples', 'community' and
'references'?
> Do you think there is a real need of nested tabs? IMHO they confuse
> navigation. Tabs should be separate conceptual contexts.
What if my site structure is:
user/
reference/
dev/
...
and I want a tab to the reference/ section?
Or there's FOP's tabs:
<tab label="Home" dir=""/>
<tab label="Development" dir="dev/"/>
<tab label="Redesign" dir="design/"/>
<tab label="alt design" dir="design/alt.design/"/>
I'm not convinced nested tabs are an evil menace we should be protecting
users from.
> >>I simply propose that the first level of site.xml is treated as tabs for
> >>our skins. That's all. And it also solves the problem Stefano outlined
> >>about a confusing navigation between tabs and menus.
> >
> >Hm.. wouldn't work currently, because our top-level entries are menus
> >without links. Non-clickable tabs aren't much use.
> >
> >Methinks we need to generalise our menu data model (i.e. book.xml)
> >first..
>
> Yup, that was implicit. Do is at you prefer, as long as they become
> clickable it's fine with me.
Long email on this subject coming up..
--Jeff
>
> --
> Nicola Ken Barozzi nicolaken@apache.org
> - verba volant, scripta manent -
> (discussions get forgotten, just code remains)
> ---------------------------------------------------------------------
> | http://mail-archives.apache.org/mod_mbox/forrest-dev/200304.mbox/%3C20030408152126.GF2165@expresso.localdomain%3E | CC-MAIN-2013-48 | refinedweb | 418 | 68.97 |
User talk:Lyciathelycanroc
From Gentoo Wiki
See my reply to your comment at Handbook Talk:AMD64#Improve install experience?. (Notifying you here since it's been over a month since you posted.) - dcljr (talk) 21:52, 11 July 2018 (UTC)
- BTW, in case you notice this edit of mine and get the wrong idea, my use of "FWIW" (For What It's Worth) was alluding to the futility of discussing things on this wiki in general (too few people watching and participating in talk page discussions to really be effective most of the time — as I guess you have found out!). It was not intended to be a comment on the quality of your particular contribution. - dcljr (talk) 04:57, 12 July 2018 (UTC)
- I do see what you're saying.... perhaps I will contribute to the installer package or write my own guide elsewhere, I get the impression it is objectively not worth it trying to contribute to the handbook. I just found that personally, the handbook involved a lot of "jumping around" and did not actually include all the relevant information i needed either, and neither did the articles it linked to. I find it upsetting when others say they won't try gentoo simply based off of the documentation that's written on it, it has a lot to offer, it's just too daunting for many users to even try sadly. --Lyciathelycanroc (talk) 11:14, 12 July 2018 (UTC)
- If you haven't noticed yet, there has been an effort made to rewrite/expand the official Handbook (still very incomplete). You can freely contribute to that (subject to other user's edits, of course), since it's outside of the "Handbook:" namespace. OTOH, since it's not official, hardly anyone ever looks at it (presumably), so there's less of an incentive to contribute to it. But its main author is a Gentoo developer, so there's always a chance that something official may come of it someday. As for systemd documentation, specifically, I assume you've already seen our systemd article, but have you noticed systemd/Installing Gnome3 from scratch (and the somewhat related User:Sakaki/Sakaki's EFI Install Guide and User:Sakaki/Sakaki's EFI Install Guide/Setting up the GNOME 3 Desktop)? Maybe you'd like to contribute to some of those pages…? Actually, you can create an entirely new systemd-specific Handbook (in the main article namespace, like the articles I've been linking to) based on the text of the existing Handbook, if you'd like. After all, the Handbook is licensed CC-BY-SA 3.0, so as long as you follow the terms of the license (e.g., giving proper credit), you are free (legally) to create such a "derivative work". (Being "socially" free to do so on this wiki is, I suppose, a different matter that I can't really speak to, since I've never attempted that ambitious of an untertaking. But I think it is well known amongst the Powers That Be that we lack good "official" documentation for users who opt for systemd, so such an effort should be welcome, I would hope.) As you can see, I am somewhat ambivalent about contributing to this wiki. I'd like more people to do it, but I'm afraid many (like me) will become frustrated by some of the established conventions here. Oh well. Good luck with whatever you decide to do… - dcljr (talk) 07:55, 13 July 2018 (UTC)
- All interesting articles, I was reminded of one of the main issues I had when i looked at the proposed rewrite too: no instructions for usb! even though there is a relevant article that is gold dust, the official and the other one completely fail to mention it despite usb becoming the more popular physical software distribution method?... even if it is unofficial the official one probably should link the complete handbook though, no standard user would think to look for it! — The preceding unsigned comment was added by Lyciathelycanroc (talk • contribs) | https://wiki.gentoo.org/wiki/User_talk:Lyciathelycanroc | CC-MAIN-2021-25 | refinedweb | 678 | 55.07 |
/* Interface definitions for display code.
   Copyright (C) 1985, 1993, 1994,.  */

/* New redisplay written by Gerd Moellmann <gerd@gnu.org>.  */

#ifndef DISPEXTERN_H_INCLUDED
#define DISPEXTERN_H_INCLUDED

#ifdef HAVE_X_WINDOWS
#include <X11/Xlib.h>
#ifdef USE_X_TOOLKIT
#include <X11/Intrinsic.h>
#endif /* USE_X_TOOLKIT */
#else /* !HAVE_X_WINDOWS */

/* X-related stuff used by non-X gui code.  */

typedef struct {
  unsigned long pixel;
  unsigned short red, green, blue;
  char flags;
  char pad;
} XColor;

#endif /* HAVE_X_WINDOWS */

#ifdef MSDOS
#include "msdos.h"
#endif

#ifdef HAVE_X_WINDOWS
typedef struct x_display_info Display_Info;
typedef XImage * XImagePtr;
typedef XImagePtr XImagePtr_or_DC;
#define NativeRectangle XRectangle
#endif

#ifdef HAVE_NTGUI
#include "w32gui.h"
typedef struct w32_display_info Display_Info;
typedef XImage *XImagePtr;
typedef HDC XImagePtr_or_DC;
#endif

#ifdef MAC_OS
#include "macgui.h"
typedef struct mac_display_info Display_Info;

/* Mac equivalent of XImage.  */
typedef Pixmap XImagePtr;
typedef XImagePtr XImagePtr_or_DC;
#endif

#ifndef NativeRectangle
#define NativeRectangle int
#endif

/* Structure forward declarations.  Some are here because function
   prototypes below reference structure types before their definition
   in this file.  Some are here because not every file including
   dispextern.h also includes frame.h and windows.h.  */

struct glyph;
struct glyph_row;
struct glyph_matrix;
struct glyph_pool;
struct frame;
struct window;

/* Values returned from coordinates_in_window.  */

enum window_part
{
  ON_NOTHING,
  ON_TEXT,
  ON_MODE_LINE,
  ON_VERTICAL_BORDER,
  ON_HEADER_LINE,
  ON_LEFT_FRINGE,
  ON_RIGHT_FRINGE,
  ON_LEFT_MARGIN,
  ON_RIGHT_MARGIN,
  ON_SCROLL_BAR
};

/* Number of bits allocated to store fringe bitmap numbers.  */
#define FRINGE_ID_BITS 16


/***********************************************************************
			      Debugging
 ***********************************************************************/

/* If GLYPH_DEBUG is non-zero, additional checks are activated.  Turn
   it off by defining the macro GLYPH_DEBUG to zero.  */

#ifndef GLYPH_DEBUG
#define GLYPH_DEBUG 0
#endif

/* If XASSERTS is non-zero, additional consistency checks are
   activated.  Turn it off by defining the macro XASSERTS to zero.  */

#ifndef XASSERTS
#define XASSERTS 0
#endif

/* Macros to include code only if GLYPH_DEBUG != 0.  */

#if GLYPH_DEBUG
#define IF_DEBUG(X)	X
#else
#define IF_DEBUG(X)	(void) 0
#endif

#if XASSERTS
#define xassert(X)	do {if (!(X)) abort ();} while (0)
#else
#define xassert(X)	(void) 0
#endif

/* Macro for displaying traces of redisplay.  If Emacs was compiled
   with GLYPH_DEBUG != 0, the variable trace_redisplay_p can be set to
   a non-zero value in debugging sessions to activate traces.  */

#if GLYPH_DEBUG

extern int trace_redisplay_p;
#include <stdio.h>

#define TRACE(X)		\
  if (trace_redisplay_p)	\
    fprintf X;			\
  else				\
    (void) 0

#else /* GLYPH_DEBUG == 0 */

#define TRACE(X)	(void) 0

#endif /* GLYPH_DEBUG == 0 */


/***********************************************************************
			    Text positions
 ***********************************************************************/

/* Starting with Emacs 20.3, characters from strings and buffers have
   both a character and a byte position associated with them.  The
   following structure holds such a pair of positions.  */

struct text_pos
{
  /* Character position.  */
  int charpos;

  /* Corresponding byte position.  */
  int bytepos;
};

/* Access character and byte position of POS in a functional form.  */

#define BYTEPOS(POS)	(POS).bytepos
#define CHARPOS(POS)	(POS).charpos

/* Set character position of POS to CHARPOS, byte position to BYTEPOS.  */

#define SET_TEXT_POS(POS, CHARPOS, BYTEPOS) \
  ((POS).charpos = (CHARPOS), (POS).bytepos = BYTEPOS)

/* Increment text position POS.  */

#define INC_TEXT_POS(POS, MULTIBYTE_P)	\
  do					\
    {					\
      ++(POS).charpos;			\
      if (MULTIBYTE_P)			\
	INC_POS ((POS).bytepos);	\
      else				\
	++(POS).bytepos;		\
    }					\
  while (0)

/* Decrement text position POS.  */

#define DEC_TEXT_POS(POS, MULTIBYTE_P)	\
  do					\
    {					\
      --(POS).charpos;			\
      if (MULTIBYTE_P)			\
	DEC_POS ((POS).bytepos);	\
      else				\
	--(POS).bytepos;		\
    }					\
  while (0)

/* Set text position POS from marker MARKER.  */

#define SET_TEXT_POS_FROM_MARKER(POS, MARKER)		\
  (CHARPOS (POS) = marker_position ((MARKER)),		\
   BYTEPOS (POS) = marker_byte_position ((MARKER)))

/* Set marker MARKER from text position POS.  */

#define SET_MARKER_FROM_TEXT_POS(MARKER, POS) \
  set_marker_both ((MARKER), Qnil, CHARPOS ((POS)), BYTEPOS ((POS)))

/* Value is non-zero if character and byte positions of POS1 and POS2
   are equal.  */

#define TEXT_POS_EQUAL_P(POS1, POS2)		\
  ((POS1).charpos == (POS2).charpos		\
   && (POS1).bytepos == (POS2).bytepos)

/* When rendering glyphs, redisplay scans string or buffer text,
   overlay strings in that text, and does display table or control
   character translations.  The following structure captures a
   position taking all this into account.  */

struct display_pos
{
  /* Buffer or string position.  */
  struct text_pos pos;

  /* If this is a position in an overlay string, overlay_string_index
     is the index of that overlay string in the sequence of overlay
     strings at `pos' in the order redisplay processes them.  A value
     < 0 means that this is not a position in an overlay string.  */
  int overlay_string_index;

  /* If this is a position in an overlay string, string_pos is the
     position within that string.  */
  struct text_pos string_pos;

  /* If the character at the position above is a control character or
     has a display table entry, dpvec_index is an index in the display
     table or control character translation of that character.  A
     value < 0 means this is not a position in such a translation.  */
  int dpvec_index;
};


/***********************************************************************
				Glyphs
 ***********************************************************************/

/* Enumeration of glyph types.  Glyph structures contain a type field
   containing one of the enumerators defined here.  */

enum glyph_type
{
  /* Glyph describes a character.  */
  CHAR_GLYPH,

  /* Glyph describes a composition sequence.  */
  COMPOSITE_GLYPH,

  /* Glyph describes an image.  */
  IMAGE_GLYPH,

  /* Glyph is a space of fractional width and/or height.  */
  STRETCH_GLYPH
};

/* Structure describing how to use partial glyphs (image slicing).  */

struct glyph_slice
{
  unsigned x : 16;
  unsigned y : 16;
  unsigned width : 16;
  unsigned height : 16;
};

/* Glyphs.

   Be extra careful when changing this structure!  Esp. make sure that
   functions producing glyphs, like append_glyph, fill ALL of the
   glyph structure, and that GLYPH_EQUAL_P compares all
   display-relevant members of glyphs (not to imply that these are the
   only things to check when you add a member).  */

struct glyph
{
  /* Position from which this glyph was drawn.  If `object' below is a
     Lisp string, this is a position in that string.  If it is a
     buffer, this is a position in that buffer.  A value of -1
     together with a null object means glyph is a truncation glyph at
     the start of a row.  */
  int charpos;

  /* Lisp object source of this glyph.  Currently either a buffer or a
     string, if the glyph was produced from characters which came from
     a buffer or a string; or 0 if the glyph was inserted by redisplay
     for its own purposes such as padding.  */
  Lisp_Object object;

  /* Width in pixels.  */
  short pixel_width;

  /* Ascent and descent in pixels.  */
  short ascent, descent;

  /* Vertical offset.  If < 0, the glyph is displayed raised, if > 0
     the glyph is displayed lowered.  */
  short voffset;

  /* Which kind of glyph this is---character, image etc.  Value
     should be an enumerator of type enum glyph_type.  */
  unsigned type : 2;

  /* 1 means this glyph was produced from multibyte text.  Zero
     means it was produced from unibyte text, i.e. charsets aren't
     applicable, and encoding is not performed.  */
  unsigned multibyte_p : 1;

  /* Non-zero means draw a box line at the left or right side of this
     glyph.  This is part of the implementation of the face attribute
     `:box'.  */
  unsigned left_box_line_p : 1;
  unsigned right_box_line_p : 1;

  /* Non-zero means this glyph's physical ascent or descent is greater
     than its logical ascent/descent, i.e. it may potentially overlap
     glyphs above or below it.  */
  unsigned overlaps_vertically_p : 1;

  /* 1 means glyph is a padding glyph.  Padding glyphs are used for
     characters whose visual shape consists of more than one glyph
     (e.g. Asian characters).  All but the first glyph of such a glyph
     sequence have the padding_p flag set.  Only used for terminal
     frames, and there only to minimize code changes.  A better way
     would probably be to use the width field of glyphs to express
     padding.  */
  unsigned padding_p : 1;

  /* 1 means the actual glyph is not available, draw a box instead.
     This can happen when a font couldn't be loaded, or a character
     doesn't have a glyph in a font.  */
  unsigned glyph_not_available_p : 1;

#define FACE_ID_BITS	21

  /* Face of the glyph.  This is a realized face ID,
     an index in the face cache of the frame.  */
  unsigned face_id : FACE_ID_BITS;

  /* Type of font used to display the character glyph.  May be used to
     determine which set of functions to use to obtain font metrics
     for the glyph.  On W32, value should be an enumerator of the type
     w32_char_font_type.  Otherwise it equals FONT_TYPE_UNKNOWN.  */
  unsigned font_type : 3;

  struct glyph_slice slice;

  /* A union of sub-structures for different glyph types.  */
  union
  {
    /* Character code for character glyphs (type == CHAR_GLYPH).  */
    unsigned ch;

    /* Composition ID for composition glyphs (type == COMPOSITION_GLYPH)  */
    unsigned cmp_id;

    /* Image ID for image glyphs (type == IMAGE_GLYPH).  */
    unsigned img_id;

    /* Sub-structure for type == STRETCH_GLYPH.  */
    struct
    {
      /* The height of the glyph.  */
      unsigned height : 16;

      /* The ascent of the glyph.  */
      unsigned ascent : 16;
    }
    stretch;

    /* Used to compare all bit-fields above in one step.  */
    unsigned val;
  } u;
};

/* Default value of the glyph font_type field.  */

#define FONT_TYPE_UNKNOWN	0

/* Is GLYPH a space?  */

#define CHAR_GLYPH_SPACE_P(GLYPH) \
  (GLYPH_FROM_CHAR_GLYPH ((GLYPH)) == SPACEGLYPH)

/* Are glyph slices of glyphs *X and *Y equal?  */

#define GLYPH_SLICE_EQUAL_P(X, Y)		\
  ((X)->slice.x == (Y)->slice.x			\
   && (X)->slice.y == (Y)->slice.y		\
   && (X)->slice.width == (Y)->slice.width	\
   && (X)->slice.height == (Y)->slice.height)

/* Are glyphs *X and *Y displayed equal?  */

#define GLYPH_EQUAL_P(X, Y)					\
  ((X)->type == (Y)->type					\
   && (X)->u.val == (Y)->u.val					\
   && GLYPH_SLICE_EQUAL_P (X, Y)				\
   && (X)->face_id == (Y)->face_id				\
   && (X)->padding_p == (Y)->padding_p				\
   && (X)->left_box_line_p == (Y)->left_box_line_p		\
   && (X)->right_box_line_p == (Y)->right_box_line_p		\
   && (X)->voffset == (Y)->voffset				\
   && (X)->pixel_width == (Y)->pixel_width)

/* Are character codes, faces, padding_ps of glyphs *X and *Y equal?  */

#define GLYPH_CHAR_AND_FACE_EQUAL_P(X, Y)	\
  ((X)->u.ch == (Y)->u.ch			\
   && (X)->face_id == (Y)->face_id		\
   && (X)->padding_p == (Y)->padding_p)

/* Fill a character glyph GLYPH.  CODE, FACE_ID, PADDING_P correspond
   to the bits defined for the typedef `GLYPH' in lisp.h.  */

#define SET_CHAR_GLYPH(GLYPH, CODE, FACE_ID, PADDING_P)	\
  do							\
    {							\
      (GLYPH).u.ch = (CODE);				\
      (GLYPH).face_id = (FACE_ID);			\
      (GLYPH).padding_p = (PADDING_P);			\
    }							\
  while (0)

/* Fill a character type glyph GLYPH from a glyph typedef FROM as
   defined in lisp.h.  */

#define SET_CHAR_GLYPH_FROM_GLYPH(GLYPH, FROM)	\
  SET_CHAR_GLYPH ((GLYPH),			\
		  FAST_GLYPH_CHAR ((FROM)),	\
		  FAST_GLYPH_FACE ((FROM)),	\
		  0)

/* Construct a glyph code from a character glyph GLYPH.  If the
   character is multibyte, return -1 as we can't use glyph table for a
   multibyte character.  */

#define GLYPH_FROM_CHAR_GLYPH(GLYPH)				\
  ((GLYPH).u.ch < 256						\
   ? ((GLYPH).u.ch | ((GLYPH).face_id << CHARACTERBITS))	\
   : -1)

/* Is GLYPH a padding glyph?  */

#define CHAR_GLYPH_PADDING_P(GLYPH) (GLYPH).padding_p


/***********************************************************************
			     Glyph Pools
 ***********************************************************************/

/* Glyph Pool.

   Glyph memory for frame-based redisplay is allocated from the heap
   in one vector kept in a glyph pool structure which is stored with
   the frame.  The size of the vector is made large enough to cover
   all windows on the frame.

   Both frame and window glyph matrices reference memory from a glyph
   pool in frame-based redisplay.

   In window-based redisplay, no glyph pools exist; windows allocate
   and free their glyph memory themselves.  */

struct glyph_pool
{
  /* Vector of glyphs allocated from the heap.  */
  struct glyph *glyphs;

  /* Allocated size of `glyphs'.  */
  int nglyphs;

  /* Number of rows and columns in a matrix.  */
  int nrows, ncolumns;
};


/***********************************************************************
			     Glyph Matrix
 ***********************************************************************/

/* Glyph Matrix.

   Three kinds of glyph matrices exist:

   1. Frame glyph matrices.  These are used for terminal frames whose
   redisplay needs a view of the whole screen due to limited terminal
   capabilities.  Frame matrices are used only in the update phase of
   redisplay.  They are built in update_frame and not used after the
   update has been performed.

   2. Window glyph matrices on frames having frame glyph matrices.
   Such matrices are sub-matrices of their corresponding frame matrix,
   i.e. frame glyph matrices and window glyph matrices share the same
   glyph memory which is allocated in form of a glyph_pool structure.
   Glyph rows in such a window matrix are slices of frame matrix rows.

   3. Free-standing window glyph matrices managing their own glyph
   storage.  This form is used in window-based redisplay which
   includes variable width and height fonts etc.

   The size of a window's row vector depends on the height of fonts
   defined on its frame.  It is chosen so that the vector is large
   enough to describe all lines in a window when it is displayed in
   the smallest possible character size.  When new fonts are loaded,
   or window sizes change, the row vector is adjusted accordingly.  */

struct glyph_matrix
{
  /* The pool from which glyph memory is allocated, if any.  This is
     null for frame matrices and for window matrices managing their
     own storage.  */
  struct glyph_pool *pool;

  /* Vector of glyph row structures.  The row at nrows - 1 is reserved
     for the mode line.  */
  struct glyph_row *rows;

  /* Number of elements allocated for the vector rows above.  */
  int rows_allocated;

  /* The number of rows used by the window if all lines were displayed
     with the smallest possible character height.  */
  int nrows;

  /* Origin within the frame matrix if this is a window matrix on a
     frame having a frame matrix.  Both values are zero for
     window-based redisplay.  */
  int matrix_x, matrix_y;

  /* Width and height of the matrix in columns and rows.  */
  int matrix_w, matrix_h;

  /* If this structure describes a window matrix of window W,
     window_left_col is the value of W->left_col, window_top_line the
     value of W->top_line, window_height and window_width are width and
     height of W, as returned by window_box, and window_vscroll is the
     value of W->vscroll at the time the matrix was last adjusted.
     Only set for window-based redisplay.  */
  int window_left_col, window_top_line;
  int window_height, window_width;
  int window_vscroll;

  /* Number of glyphs reserved for left and right marginal areas when
     the matrix was last adjusted.  */
  int left_margin_glyphs, right_margin_glyphs;

  /* Flag indicating that scrolling should not be tried in
     update_window.  This flag is set by functions like try_window_id
     which do their own scrolling.  */
  unsigned no_scrolling_p : 1;

  /* Non-zero means window displayed in this matrix has a top mode
     line.  */
  unsigned header_line_p : 1;

#ifdef GLYPH_DEBUG
  /* A string identifying the method used to display the matrix.  */
  char method[512];
#endif

  /* The buffer this matrix displays.  Set in
     mark_window_display_accurate_1.  */
  struct buffer *buffer;

  /* Values of BEGV and ZV as of last redisplay.  Set in
     mark_window_display_accurate_1.  */
  int begv, zv;
};

/* Check that glyph pointers stored in glyph rows of MATRIX are okay.
   This aborts if any pointer is found twice.  */

#if GLYPH_DEBUG
void check_matrix_pointer_lossage P_ ((struct glyph_matrix *));
#define CHECK_MATRIX(MATRIX) check_matrix_pointer_lossage ((MATRIX))
#else
#define CHECK_MATRIX(MATRIX) (void) 0
#endif


/***********************************************************************
			     Glyph Rows
 ***********************************************************************/

/* Area in window glyph matrix.  If values are added or removed, the
   function mark_object in alloc.c has to be changed.  */

enum glyph_row_area
{
  LEFT_MARGIN_AREA,
  TEXT_AREA,
  RIGHT_MARGIN_AREA,
  LAST_AREA
};

/* Rows of glyphs in a window or frame glyph matrix.

   Each row is partitioned into three areas.  The start and end of
   each area is recorded in a pointer as shown below.

   +--------------------+-------------+---------------------+
   |  left margin area  |  text area  |  right margin area  |
   +--------------------+-------------+---------------------+
   |                    |             |                     |
   glyphs[LEFT_MARGIN_AREA]           glyphs[RIGHT_MARGIN_AREA]
			|                                   |
			glyphs[TEXT_AREA]   glyphs[LAST_AREA]

   Rows in frame matrices reference glyph memory allocated in a frame
   glyph pool (see the description of struct glyph_pool).  Rows in
   window matrices on frames having frame matrices reference slices of
   the glyphs of corresponding rows in the frame matrix.

   Rows in window matrices on frames having no frame matrices point to
   glyphs allocated from the heap via xmalloc;
   glyphs[LEFT_MARGIN_AREA] is the start address of the allocated
   glyph structure array.  */

struct glyph_row
{
  /* Pointers to beginnings of areas.  The end of an area A is found at
     A + 1 in the vector.  The last element of the vector is the end
     of the whole row.

     Kludge alert: Even if used[TEXT_AREA] == 0, glyphs[TEXT_AREA][0]'s
     position field is used.  It is -1 if this row does not correspond
     to any text; it is some buffer position if the row corresponds to
     an empty display line that displays a line end.  This is what old
     redisplay used to do.  (Except in code for terminal frames, this
     kludge is no longer used, I believe. --gerd).

     See also start, end, displays_text_p and ends_at_zv_p for cleaner
     ways to do it.  The special meaning of positions 0 and -1 will be
     removed some day, so don't use it in new code.  */
  struct glyph *glyphs[1 + LAST_AREA];

  /* Number of glyphs actually filled in areas.  */
  short used[LAST_AREA];

  /* Window-relative x and y-position of the top-left corner of this
     row.  If y < 0, this means that abs (y) pixels of the row are
     invisible because it is partially visible at the top of a window.
     If x < 0, this means that abs (x) pixels of the first glyph of
     the text area of the row are invisible because the glyph is
     partially visible.  */
  int x, y;

  /* Width of the row in pixels without taking face extension at the
     end of the row into account, and without counting truncation
     and continuation glyphs at the end of a row on ttys.  */
  int pixel_width;

  /* Logical ascent/height of this line.  The value of ascent is zero
     and height is 1 on terminal frames.  */
  int ascent, height;

  /* Physical ascent/height of this line.  If max_ascent > ascent,
     this line overlaps the line above it on the display.  Otherwise,
     if max_height > height, this line overlaps the line beneath it.  */
  int phys_ascent, phys_height;

  /* Portion of row that is visible.  Partially visible rows may be
     found at the top and bottom of a window.  This is 1 for tty
     frames.  It may be < 0 in case of completely invisible rows.  */
  int visible_height;

  /* Extra line spacing added after this row.  Do not consider this
     in last row when checking if row is fully visible.  */
  int extra_line_spacing;

  /* Hash code.  This hash code is available as soon as the row
     is constructed, i.e. after a call to display_line.  */
  unsigned hash;

  /* First position in this row.  This is the text position, including
     overlay position information etc, where the display of this row
     started, and can thus be less than the position of the first
     glyph (e.g. due to invisible text or horizontal scrolling).  */
  struct display_pos start;

  /* Text position at the end of this row.  This is the position after
     the last glyph on this row.  It can be greater than the last
     glyph position + 1, due to truncation, invisible text etc.  In an
     up-to-date display, this should always be equal to the start
     position of the next row.  */
  struct display_pos end;

  /* Non-zero means the overlay arrow bitmap is on this line.
     -1 means use default overlay arrow bitmap, else
     it specifies actual fringe bitmap number.  */
  int overlay_arrow_bitmap;

  /* Left fringe bitmap number (enum fringe_bitmap_type).  */
  unsigned left_user_fringe_bitmap : FRINGE_ID_BITS;

  /* Right fringe bitmap number (enum fringe_bitmap_type).  */
  unsigned right_user_fringe_bitmap : FRINGE_ID_BITS;

  /* Left fringe bitmap number (enum fringe_bitmap_type).  */
  unsigned left_fringe_bitmap : FRINGE_ID_BITS;

  /* Right fringe bitmap number (enum fringe_bitmap_type).  */
  unsigned right_fringe_bitmap : FRINGE_ID_BITS;

  /* Face of the left fringe glyph.  */
  unsigned left_user_fringe_face_id : FACE_ID_BITS;

  /* Face of the right fringe glyph.  */
  unsigned right_user_fringe_face_id : FACE_ID_BITS;

  /* Face of the left fringe glyph.  */
  unsigned left_fringe_face_id : FACE_ID_BITS;

  /* Face of the right fringe glyph.  */
  unsigned right_fringe_face_id : FACE_ID_BITS;

  /* 1 means that we must draw the bitmaps of this row.  */
  unsigned redraw_fringe_bitmaps_p : 1;

  /* In a desired matrix, 1 means that this row must be updated.  In a
     current matrix, 0 means that the row has been invalidated, i.e.
     the row's contents do not agree with what is visible on the
     screen.  */
  unsigned enabled_p : 1;

  /* 1 means row displays a text line that is truncated on the left or
     right side.  */
  unsigned truncated_on_left_p : 1;
  unsigned truncated_on_right_p : 1;

  /* 1 means that this row displays a continued line, i.e. it has a
     continuation mark at the right side.  */
  unsigned continued_p : 1;

  /* 0 means that this row does not contain any text, i.e. it is a
     blank line at the window and buffer end.  */
  unsigned displays_text_p : 1;

  /* 1 means that this line ends at ZV.  */
  unsigned ends_at_zv_p : 1;

  /* 1 means the face of the last glyph in the text area is drawn to
     the right end of the window.  This flag is used in
     update_text_area to optimize clearing to the end of the area.  */
  unsigned fill_line_p : 1;

  /* Non-zero means display a bitmap on X frames indicating that this
     line contains no text and ends in ZV.  */
  unsigned indicate_empty_line_p : 1;

  /* 1 means this row contains glyphs that overlap each other because
     of lbearing or rbearing.  */
  unsigned contains_overlapping_glyphs_p : 1;

  /* 1 means this row is as wide as the window it is displayed in,
     including scroll bars, fringes, and internal borders.  This also
     implies that the row doesn't have marginal areas.  */
  unsigned full_width_p : 1;

  /* Non-zero means row is a mode or header-line.  */
  unsigned mode_line_p : 1;

  /* 1 in a current row means this row is overlapped by another row.  */
  unsigned overlapped_p : 1;

  /* 1 means this line ends in the middle of a character consisting
     of more than one glyph.  Some glyphs have been put in this row,
     the rest are put in rows below this one.  */
  unsigned ends_in_middle_of_char_p : 1;

  /* 1 means this line starts in the middle of a character consisting
     of more than one glyph.  Some glyphs have been put in the
     previous row, the rest are put in this row.  */
  unsigned starts_in_middle_of_char_p : 1;

  /* 1 in a current row means this row overlaps others.  */
  unsigned overlapping_p : 1;

  /* 1 means some glyphs in this row are displayed in mouse-face.  */
  unsigned mouse_face_p : 1;

  /* 1 means this row was ended by a newline from a string.  */
  unsigned ends_in_newline_from_string_p : 1;

  /* 1 means this row width is exactly the width of the window, and the
     final newline character is hidden in the right fringe.  */
  unsigned exact_window_width_line_p : 1;

  /* 1 means this row currently shows the cursor in the right fringe.  */
  unsigned cursor_in_fringe_p : 1;

  /* 1 means the last glyph in the row is part of an ellipsis.  */
  unsigned ends_in_ellipsis_p : 1;

  /* Non-zero means display a bitmap on X frames indicating that this
     is the first line of the buffer.  */
  unsigned indicate_bob_p : 1;

  /* Non-zero means display a bitmap on X frames indicating that this
     is the top line of the window, but not start of the buffer.  */
  unsigned indicate_top_line_p : 1;

  /* Non-zero means display a bitmap on X frames indicating that this
     is the last line of the buffer.  */
  unsigned indicate_eob_p : 1;

  /* Non-zero means display a bitmap on X frames indicating that this
     is the bottom line of the window, but not end of the buffer.  */
  unsigned indicate_bottom_line_p : 1;

  /* Continuation lines width at the start of the row.  */
  int continuation_lines_width;
};

/* Get a pointer to row number ROW in matrix MATRIX.  If GLYPH_DEBUG
   is defined to a non-zero value, the function matrix_row checks that
   we don't try to access rows that are out of bounds.  */

#if GLYPH_DEBUG
struct glyph_row *matrix_row P_ ((struct glyph_matrix *, int));
#define MATRIX_ROW(MATRIX, ROW)   matrix_row ((MATRIX), (ROW))
#else
#define MATRIX_ROW(MATRIX, ROW)	  ((MATRIX)->rows + (ROW))
#endif

/* Return a pointer to the row reserved for the mode line in MATRIX.
   Row MATRIX->nrows - 1 is always reserved for the mode line.  */

#define MATRIX_MODE_LINE_ROW(MATRIX) \
  ((MATRIX)->rows + (MATRIX)->nrows - 1)

/* Return a pointer to the row reserved for the header line in MATRIX.
   This is always the first row in MATRIX because that's the only
   way that works in frame-based redisplay.  */

#define MATRIX_HEADER_LINE_ROW(MATRIX) (MATRIX)->rows

/* Return a pointer to first row in MATRIX used for text display.  */

#define MATRIX_FIRST_TEXT_ROW(MATRIX) \
  ((MATRIX)->rows->mode_line_p ? (MATRIX)->rows + 1 : (MATRIX)->rows)

/* Return a pointer to the first glyph in the text area of a row.
   MATRIX is the glyph matrix accessed, and ROW is the row index in
   MATRIX.  */

#define MATRIX_ROW_GLYPH_START(MATRIX, ROW) \
  (MATRIX_ROW ((MATRIX), (ROW))->glyphs[TEXT_AREA])

/* Return the number of used glyphs in the text area of a row.  */

#define MATRIX_ROW_USED(MATRIX, ROW) \
  (MATRIX_ROW ((MATRIX), (ROW))->used[TEXT_AREA])

/* Return the character/byte position at which the display of ROW
   starts.  */

#define MATRIX_ROW_START_CHARPOS(ROW) ((ROW)->start.pos.charpos)
#define MATRIX_ROW_START_BYTEPOS(ROW) ((ROW)->start.pos.bytepos)

/* Return the character/byte position at which ROW ends.  */

#define MATRIX_ROW_END_CHARPOS(ROW) ((ROW)->end.pos.charpos)
#define MATRIX_ROW_END_BYTEPOS(ROW) ((ROW)->end.pos.bytepos)

/* Return the vertical position of ROW in MATRIX.  */

#define MATRIX_ROW_VPOS(ROW, MATRIX) ((ROW) - (MATRIX)->rows)

/* Return the last glyph row + 1 in MATRIX on window W reserved for
   text.  If W has a mode line, the last row in the matrix is reserved
   for it.  */

#define MATRIX_BOTTOM_TEXT_ROW(MATRIX, W)		\
  ((MATRIX)->rows					\
   + (MATRIX)->nrows					\
   - (WINDOW_WANTS_MODELINE_P ((W)) ? 1 : 0))

/* Non-zero if the face of the last glyph in ROW's text area has
   to be drawn to the end of the text area.  */

#define MATRIX_ROW_EXTENDS_FACE_P(ROW) ((ROW)->fill_line_p)

/* Set and query the enabled_p flag of glyph row ROW in MATRIX.  */

#define SET_MATRIX_ROW_ENABLED_P(MATRIX, ROW, VALUE) \
  (MATRIX_ROW ((MATRIX), (ROW))->enabled_p = (VALUE) != 0)

#define MATRIX_ROW_ENABLED_P(MATRIX, ROW) \
  (MATRIX_ROW ((MATRIX), (ROW))->enabled_p)

/* Non-zero if ROW displays text.  Value is non-zero if the row is
   blank but displays a line end.  */

#define MATRIX_ROW_DISPLAYS_TEXT_P(ROW) ((ROW)->displays_text_p)

/* Helper macros.  */

#define MR_PARTIALLY_VISIBLE(ROW)	\
  ((ROW)->height != (ROW)->visible_height)

#define MR_PARTIALLY_VISIBLE_AT_TOP(W, ROW)  \
  ((ROW)->y < WINDOW_HEADER_LINE_HEIGHT ((W)))

#define MR_PARTIALLY_VISIBLE_AT_BOTTOM(W, ROW)		  \
  (((ROW)->y + (ROW)->height - (ROW)->extra_line_spacing) \
   > WINDOW_BOX_HEIGHT_NO_MODE_LINE ((W)))

/* Non-zero if ROW is not completely visible in window W.  */

#define MATRIX_ROW_PARTIALLY_VISIBLE_P(W, ROW)		\
  (MR_PARTIALLY_VISIBLE ((ROW))				\
   && (MR_PARTIALLY_VISIBLE_AT_TOP ((W), (ROW))		\
       || MR_PARTIALLY_VISIBLE_AT_BOTTOM ((W), (ROW))))

/* Non-zero if ROW is partially visible at the top of window W.  */

#define MATRIX_ROW_PARTIALLY_VISIBLE_AT_TOP_P(W, ROW)	\
  (MR_PARTIALLY_VISIBLE ((ROW))				\
   && MR_PARTIALLY_VISIBLE_AT_TOP ((W), (ROW)))

/* Non-zero if ROW is partially visible at the bottom of window W.  */

#define MATRIX_ROW_PARTIALLY_VISIBLE_AT_BOTTOM_P(W, ROW)	\
  (MR_PARTIALLY_VISIBLE ((ROW))					\
   && MR_PARTIALLY_VISIBLE_AT_BOTTOM ((W), (ROW)))

/* Return the bottom Y + 1 of ROW.  */

#define MATRIX_ROW_BOTTOM_Y(ROW) ((ROW)->y + (ROW)->height)

/* Is ROW the last visible one in the display described by the
   iterator structure pointed to by IT?  */

#define MATRIX_ROW_LAST_VISIBLE_P(ROW, IT) \
  (MATRIX_ROW_BOTTOM_Y ((ROW)) >= (IT)->last_visible_y)

/* Non-zero if ROW displays a continuation line.  */

#define MATRIX_ROW_CONTINUATION_LINE_P(ROW) \
  ((ROW)->continuation_lines_width > 0)

/* Non-zero if ROW ends in the middle of a character.  This is the
   case for continued lines showing only part of a display table
   entry or a control char, or an overlay string.  */

#define MATRIX_ROW_ENDS_IN_MIDDLE_OF_CHAR_P(ROW)	\
  ((ROW)->end.dpvec_index > 0				\
   || (ROW)->end.overlay_string_index >= 0		\
   || (ROW)->ends_in_middle_of_char_p)

/* Non-zero if ROW ends in the middle of an overlay string.  */

#define MATRIX_ROW_ENDS_IN_OVERLAY_STRING_P(ROW) \
  ((ROW)->end.overlay_string_index >= 0)

/* Non-zero if ROW starts in the middle of a character.  See above.  */

#define MATRIX_ROW_STARTS_IN_MIDDLE_OF_CHAR_P(ROW)	\
  ((ROW)->start.dpvec_index > 0				\
   || (ROW)->starts_in_middle_of_char_p			\
   || ((ROW)->start.overlay_string_index >= 0		\
       && (ROW)->start.string_pos.charpos > 0))

/* Non-zero means ROW overlaps its predecessor.  */

#define MATRIX_ROW_OVERLAPS_PRED_P(ROW)		\
  ((ROW)->phys_ascent > (ROW)->ascent)

/* Non-zero means ROW overlaps its successor.  */

#define MATRIX_ROW_OVERLAPS_SUCC_P(ROW)		\
  ((ROW)->phys_height - (ROW)->phys_ascent	\
   > (ROW)->height - (ROW)->ascent)

/* Non-zero means that fonts have been loaded since the last glyph
   matrix adjustments.  The function redisplay_internal adjusts glyph
   matrices when this flag is non-zero.  */

extern int fonts_changed_p;

/* A glyph for a space.  */

extern struct glyph space_glyph;

/* Frame being updated by update_window/update_frame.  */

extern struct frame *updating_frame;

/* Window being updated by update_window.  This is non-null as long as
   update_window has not finished, and null otherwise.  Its role is
   analogous to updating_frame.  */

extern struct window *updated_window;

/* Glyph row and area updated by update_window_line.  */

extern struct glyph_row *updated_row;
extern int updated_area;

/* Non-zero means reading single-character input with prompt so put
   cursor on mini-buffer after the prompt.  Positive means at end of
   text in echo area; negative means at beginning of line.  */

extern int cursor_in_echo_area;

/* Non-zero means last display completed.  Zero means it was
   preempted.  */

extern int display_completed;

/* Non-zero means redisplay has been performed directly (see also
   direct_output_for_insert and direct_output_forward_char), so that
   no further updating has to be performed.  The function
   redisplay_internal checks this flag, and does nothing but reset it
   to zero if it is non-zero.  */

extern int redisplay_performed_directly_p;

/* A temporary storage area, including a row of glyphs.  Initialized
   in xdisp.c.  Used for various purposes, as an example see
   direct_output_for_insert.  */

extern struct glyph_row scratch_glyph_row;


/************************************************************************
			      Glyph Strings
 ************************************************************************/

/* Enumeration for overriding/changing the face to use for drawing
   glyphs in draw_glyphs.  */

enum draw_glyphs_face
{
  DRAW_NORMAL_TEXT,
  DRAW_INVERSE_VIDEO,
  DRAW_CURSOR,
  DRAW_MOUSE_FACE,
  DRAW_IMAGE_RAISED,
  DRAW_IMAGE_SUNKEN
};

#ifdef HAVE_WINDOW_SYSTEM

/* A sequence of glyphs to be drawn in the same face.  */

struct glyph_string
{
  /* X-origin of the string.  */
  int x;

  /* Y-origin and y-position of the base line of this string.  */
  int y, ybase;

  /* The width of the string, not including a face extension.  */
  int width;

  /* The width of the string, including a face extension.  */
  int background_width;

  /* The height of this string.  This is the height of the line this
     string is drawn in, and can be different from the height of the
     font the string is drawn in.  */
  int height;

  /* Number of pixels this string overwrites in front of its x-origin.
     This number is zero if the string has an lbearing >= 0; it is
     -lbearing, if the string has an lbearing < 0.  */
  int left_overhang;

  /* Number of pixels this string overwrites past its right-most
     nominal x-position, i.e. x + width.  Zero if the string's
     rbearing is <= its nominal width, rbearing - width otherwise.  */
  int right_overhang;

  /* The frame on which the glyph string is drawn.  */
  struct frame *f;

  /* The window on which the glyph string is drawn.  */
  struct window *w;

  /* X display and window for convenience.  */
  Display *display;
  Window window;

  /* The glyph row for which this string was built.  It determines the
     y-origin and height of the string.  */
  struct glyph_row *row;

  /* The area within row.  */
  enum glyph_row_area area;

  /* Characters to be drawn, and number of characters.  */
  XChar2b *char2b;
  int nchars;

  /* A face-override for drawing cursors, mouse face and similar.  */
  enum draw_glyphs_face hl;

  /* Face in which this string is to be drawn.  */
  struct face *face;

  /* Font in which this string is to be drawn.  */
  XFontStruct *font;

  /* Font info for this string.  */
  struct font_info *font_info;

  /* Non-null means this string describes (part of) a composition.
     All characters from char2b are drawn composed.  */
  struct composition *cmp;

  /* Index of this glyph string's first character in the glyph
     definition of CMP.  If this is zero, this glyph string describes
     the first character of a composition.  */
  int gidx;

  /* 1 means this glyph string's face has to be drawn to the right end
     of the window's drawing area.  */
  unsigned extends_to_end_of_line_p : 1;

  /* 1 means the background of this string has been drawn.  */
  unsigned background_filled_p : 1;

  /* 1 means glyph string must be drawn with 16-bit functions.  */
  unsigned two_byte_p : 1;

  /* 1 means that the original font determined for drawing this glyph
     string could not be loaded.  The member `font' has been set to
     the frame's default font in this case.  */
  unsigned font_not_found_p : 1;

  /* 1 means that the face in which this glyph string is drawn has a
     stipple pattern.  */
  unsigned stippled_p : 1;

#define OVERLAPS_PRED		(1 << 0)
#define OVERLAPS_SUCC		(1 << 1)
#define OVERLAPS_BOTH		(OVERLAPS_PRED | OVERLAPS_SUCC)
#define OVERLAPS_ERASED_CURSOR	(1 << 2)
  /* Non-zero means only the foreground of this glyph string must be
     drawn, and we should use the physical height of the line this
     glyph string appears in as clip rect.  If the value is
     OVERLAPS_ERASED_CURSOR, the clip rect is restricted to the rect
     of the erased cursor.  OVERLAPS_PRED and OVERLAPS_SUCC mean we
     draw overlaps with the preceding and the succeeding rows,
     respectively.  */
  unsigned for_overlaps : 3;

  /* The GC to use for drawing this glyph string.  */
#if defined(HAVE_X_WINDOWS) || defined(MAC_OS)
  GC gc;
#endif
#if defined(HAVE_NTGUI)
  XGCValues *gc;
  HDC hdc;
#endif

  /* A pointer to the first glyph in the string.  This glyph
     corresponds to char2b[0].  Needed to draw rectangles if
     font_not_found_p is 1.  */
  struct glyph *first_glyph;

  /* Image, if any.  */
  struct image *img;

  /* Slice.  */
  struct glyph_slice slice;

  /* Non-null means the horizontal clipping region starts from the
     left edge of *clip_head, and ends with the right edge of
     *clip_tail, not including their overhangs.  */
  struct glyph_string *clip_head, *clip_tail;

  struct glyph_string *next, *prev;
};

#endif /* HAVE_WINDOW_SYSTEM */


/************************************************************************
			  Display Dimensions
 ************************************************************************/

/* Return the height of the mode line in glyph matrix MATRIX, or zero
   if not known.  This macro is called under circumstances where
   MATRIX might not have been allocated yet.  */

#define MATRIX_MODE_LINE_HEIGHT(MATRIX)		\
  ((MATRIX) && (MATRIX)->rows			\
   ? MATRIX_MODE_LINE_ROW (MATRIX)->height	\
   : 0)

/* Return the height of the header line in glyph matrix MATRIX, or
   zero if not known.  This macro is called under circumstances where
   MATRIX might not have been allocated yet.  */

#define MATRIX_HEADER_LINE_HEIGHT(MATRIX)	\
  ((MATRIX) && (MATRIX)->rows			\
   ? MATRIX_HEADER_LINE_ROW (MATRIX)->height	\
   : 0)

/* Return the desired face id for the mode line of a window, depending
   on whether the window is selected or not, or if the window is the
   scrolling window for the currently active minibuffer window.

   Due to the way display_mode_lines manipulates the contents of
   selected_window, this macro needs three arguments: SELW which is
   compared against the current value of selected_window, MBW which is
   compared against minibuf_window (if SELW doesn't match), and SCRW
   which is compared against minibuf_selected_window (if MBW matches).
*/ #define CURRENT_MODE_LINE_FACE_ID_3(SELW, MBW, SCRW) \ ((!mode_line_in_non_selected_windows \ || (SELW) == XWINDOW (selected_window) \ || (minibuf_level > 0 \ && !NILP (minibuf_selected_window) \ && (MBW) == XWINDOW (minibuf_window) \ && (SCRW) == XWINDOW (minibuf_selected_window))) \ ? MODE_LINE_FACE_ID \ : MODE_LINE_INACTIVE_FACE_ID) /* Return the desired face id for the mode line of window W. */ #define CURRENT_MODE_LINE_FACE_ID(W) \ (CURRENT_MODE_LINE_FACE_ID_3((W), XWINDOW (selected_window), (W))) /* Return the current height of the mode line of window W. If not known from current_mode_line_height, look at W's current glyph matrix, or return a default based on the height of the font of the face `mode-line'. */ #define CURRENT_MODE_LINE_HEIGHT(W) \ (current_mode_line_height >= 0 \ ? current_mode_line_height \ : (MATRIX_MODE_LINE_HEIGHT ((W)->current_matrix) \ ? MATRIX_MODE_LINE_HEIGHT ((W)->current_matrix) \ : estimate_mode_line_height (XFRAME ((W)->frame), \ CURRENT_MODE_LINE_FACE_ID (W)))) /* Return the current height of the header line of window W. If not known from current_header_line_height, look at W's current glyph matrix, or return an estimation based on the height of the font of the face `header-line'. */ #define CURRENT_HEADER_LINE_HEIGHT(W) \ (current_header_line_height >= 0 \ ? current_header_line_height \ : (MATRIX_HEADER_LINE_HEIGHT ((W)->current_matrix) \ ? MATRIX_HEADER_LINE_HEIGHT ((W)->current_matrix) \ : estimate_mode_line_height (XFRAME ((W)->frame), \ HEADER_LINE_FACE_ID))) /* Return the height of the desired mode line of window W. */ #define DESIRED_MODE_LINE_HEIGHT(W) \ MATRIX_MODE_LINE_HEIGHT ((W)->desired_matrix) /* Return the height of the desired header line of window W. */ #define DESIRED_HEADER_LINE_HEIGHT(W) \ MATRIX_HEADER_LINE_HEIGHT ((W)->desired_matrix) /* Value is non-zero if window W wants a mode line. 
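   The three-way test that CURRENT_MODE_LINE_FACE_ID_3 encodes can be
   sketched as a plain function.  This is a minimal sketch only: the
   boolean parameters and the *_SKETCH constants stand in for the real
   window comparisons and enum face_id values, and none of these names
   exist in Emacs.

```c
#include <assert.h>
#include <stdbool.h>

// Illustrative stand-ins for the real face ids (the actual values come
// from enum face_id elsewhere in this header).
enum { MODE_LINE_FACE_ID_SKETCH = 1, MODE_LINE_INACTIVE_FACE_ID_SKETCH = 2 };

// Sketch of the decision: a window gets the active mode-line face when
// non-selected windows share the active face, when it is the selected
// window, or when it is the scrolling window of the currently active
// minibuffer.
static int
mode_line_face_id_sketch (bool mode_line_in_non_selected,
                          bool is_selected_window,
                          bool minibuf_active,
                          bool is_minibuf_scroll_window)
{
  if (!mode_line_in_non_selected
      || is_selected_window
      || (minibuf_active && is_minibuf_scroll_window))
    return MODE_LINE_FACE_ID_SKETCH;
  return MODE_LINE_INACTIVE_FACE_ID_SKETCH;
}
```

   Only a window that fails all three tests falls through to the
   inactive face.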
 */

#define WINDOW_WANTS_MODELINE_P(W)					\
  (!MINI_WINDOW_P ((W))							\
   && !(W)->pseudo_window_p						\
   && FRAME_WANTS_MODELINE_P (XFRAME (WINDOW_FRAME ((W))))		\
   && BUFFERP ((W)->buffer)						\
   && !NILP (XBUFFER ((W)->buffer)->mode_line_format)			\
   && WINDOW_TOTAL_LINES (W) > 1)

/* Value is non-zero if window W wants a header line.  */

#define WINDOW_WANTS_HEADER_LINE_P(W)					\
  (!MINI_WINDOW_P ((W))							\
   && !(W)->pseudo_window_p						\
   && FRAME_WANTS_MODELINE_P (XFRAME (WINDOW_FRAME ((W))))		\
   && BUFFERP ((W)->buffer)						\
   && !NILP (XBUFFER ((W)->buffer)->header_line_format)			\
   && WINDOW_TOTAL_LINES (W) > 1 + !NILP (XBUFFER ((W)->buffer)->mode_line_format))

/* Return proper value to be used as baseline offset of font that has
   ASCENT and DESCENT to draw characters by the font at the vertical
   center of the line of frame F.  Here, our task is to find the value
   of BOFF in the following figure:

   [figure: a font box of total HEIGHT (= ASCENT + DESCENT) centered
    vertically within the frame line of F_HEIGHT (= F_ASCENT +
    F_DESCENT); BOFF is the offset from the frame's baseline to the
    font's baseline]

	-BOFF + DESCENT + (F_HEIGHT - HEIGHT) / 2 = F_DESCENT
	BOFF = DESCENT + (F_HEIGHT - HEIGHT) / 2 - F_DESCENT

	DESCENT	  = FONT->descent
	HEIGHT	  = FONT_HEIGHT (FONT)
	F_DESCENT = (FRAME_FONT (F)->descent
		     - F->output_data.x->baseline_offset)
	F_HEIGHT  = FRAME_LINE_HEIGHT (F)
*/

#define VCENTER_BASELINE_OFFSET(FONT, F)			\
  (FONT_DESCENT (FONT)						\
   + (FRAME_LINE_HEIGHT ((F)) - FONT_HEIGHT ((FONT))		\
      + (FRAME_LINE_HEIGHT ((F)) > FONT_HEIGHT ((FONT)))) / 2	\
   - (FONT_DESCENT (FRAME_FONT (F)) - FRAME_BASELINE_OFFSET (F)))


/***********************************************************************
				Faces
 ***********************************************************************/

/* Indices of face attributes in Lisp face vectors.  Slot zero is the
   symbol `face'.
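   The BOFF derivation in the VCENTER_BASELINE_OFFSET comment above can
   be checked numerically.  A minimal sketch with made-up pixel values;
   the function name and all numbers are illustrative, not part of the
   real Emacs API.

```c
#include <assert.h>

// BOFF = DESCENT + (F_HEIGHT - HEIGHT) / 2 - F_DESCENT, including the
// same +1 rounding term the macro applies when F_HEIGHT > HEIGHT.
static int
vcenter_baseline_offset_sketch (int descent, int height,
                                int f_descent, int f_height)
{
  return descent
    + (f_height - height + (f_height > height)) / 2
    - f_descent;
}
```

   For a 16-pixel font (descent 4) centered in a 20-pixel frame line
   (descent 5), this yields 4 + (20 - 16 + 1) / 2 - 5 = 1; when font
   and frame line have equal metrics the offset is 0, as expected.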
*/ enum lface_attribute_index { LFACE_FAMILY_INDEX = 1, LFACE_SWIDTH_INDEX, LFACE_HEIGHT_INDEX, LFACE_WEIGHT_INDEX, LFACE_SLANT_INDEX, LFACE_UNDERLINE_INDEX, LFACE_INVERSE_INDEX, LFACE_FOREGROUND_INDEX, LFACE_BACKGROUND_INDEX, LFACE_STIPPLE_INDEX, LFACE_OVERLINE_INDEX, LFACE_STRIKE_THROUGH_INDEX, LFACE_BOX_INDEX, LFACE_FONT_INDEX, LFACE_INHERIT_INDEX, LFACE_AVGWIDTH_INDEX, LFACE_VECTOR_SIZE }; /* Box types of faces. */ enum face_box_type { /* No box around text. */ FACE_NO_BOX, /* Simple box of specified width and color. Default width is 1, and default color is the foreground color of the face. */ FACE_SIMPLE_BOX, /* Boxes with 3D shadows. Color equals the background color of the face. Width is specified. */ FACE_RAISED_BOX, FACE_SUNKEN_BOX }; /* Structure describing a realized face. For each Lisp face, 0..N realized faces can exist for different frames and different charsets. Realized faces are built from Lisp faces and text properties/overlays by merging faces and adding unspecified attributes from the `default' face. */ struct face { /* The id of this face. The id equals the index of this face in the vector faces_by_id of its face cache. */ int id; #ifdef HAVE_WINDOW_SYSTEM /* If non-zero, this is a GC that we can use without modification for drawing the characters in this face. */ GC gc; /* Font used for this face, or null if the font could not be loaded for some reason. This points to a `font' slot of a struct font_info, and we should not call XFreeFont on it because the font may still be used somewhere else. */ XFontStruct *font; /* Background stipple or bitmap used for this face. This is an id as returned from load_pixmap. */ int stipple; #else /* not HAVE_WINDOW_SYSTEM */ /* Dummy. */ int stipple; #endif /* not HAVE_WINDOW_SYSTEM */ /* Pixel value of foreground color for X frames. Color index for tty frames. */ unsigned long foreground; /* Pixel value or color index of background color. 
*/ unsigned long background; /* Pixel value or color index of underline color. */ unsigned long underline_color; /* Pixel value or color index of overlined, strike-through, or box color. */ unsigned long overline_color; unsigned long strike_through_color; unsigned long box_color; /* The font's name. This points to a `name' of a font_info, and it must not be freed. */ char *font_name; /* Font info ID for this face's font. An ID is stored here because pointers to font_info structures may change. The reason is that they are pointers into a font table vector that is itself reallocated. */ int font_info_id; /* Fontset ID if this face uses a fontset, or -1. This is only >= 0 if the face was realized for a composition sequence. Otherwise, a specific font is loaded from the set of fonts specified by the fontset given by the family attribute of the face. */ int fontset; /* Pixmap width and height. */ unsigned int pixmap_w, pixmap_h; /* Non-zero means characters in this face have a box that thickness around them. If it is negative, the absolute value indicates the thickness, and the horizontal lines of box (top and bottom) are drawn inside of characters glyph area. The vertical lines of box (left and right) are drawn as the same way as the case that this value is positive. */ int box_line_width; /* Type of box drawn. A value of FACE_NO_BOX means no box is drawn around text in this face. A value of FACE_SIMPLE_BOX means a box of width box_line_width is drawn in color box_color. A value of FACE_RAISED_BOX or FACE_SUNKEN_BOX means a 3D box is drawn with shadow colors derived from the background color of the face. */ enum face_box_type box; /* If `box' above specifies a 3D type, 1 means use box_color for drawing shadows. */ unsigned use_box_color_for_shadows_p : 1; /* The Lisp face attributes this face realizes. All attributes in this vector are non-nil. */ Lisp_Object lface[LFACE_VECTOR_SIZE]; /* The hash value of this face. 
*/ unsigned hash; /* The charset for which this face was realized if it was realized for use in multibyte text. If fontset >= 0, this is the charset of the first character of the composition sequence. A value of charset < 0 means the face was realized for use in unibyte text where the idea of Emacs charsets isn't applicable. */ int charset; /* Non-zero if text in this face should be underlined, overlined, strike-through or have a box drawn around it. */ unsigned underline_p : 1; unsigned overline_p : 1; unsigned strike_through_p : 1; /* 1 means that the colors specified for this face could not be loaded, and were replaced by default colors, so they shouldn't be freed. */ unsigned foreground_defaulted_p : 1; unsigned background_defaulted_p : 1; /* 1 means that either no color is specified for underlining or that the specified color couldn't be loaded. Use the foreground color when drawing in that case. */ unsigned underline_defaulted_p : 1; /* 1 means that either no color is specified for the corresponding attribute or that the specified color couldn't be loaded. Use the foreground color when drawing in that case. */ unsigned overline_color_defaulted_p : 1; unsigned strike_through_color_defaulted_p : 1; unsigned box_color_defaulted_p : 1; /* TTY appearances. Blinking is not yet implemented. Colors are found in `lface' with empty color string meaning the default color of the TTY. */ unsigned tty_bold_p : 1; unsigned tty_dim_p : 1; unsigned tty_underline_p : 1; unsigned tty_alt_charset_p : 1; unsigned tty_reverse_p : 1; unsigned tty_blinking_p : 1; /* 1 means that colors of this face may not be freed because they have been copied bitwise from a base face (see realize_x_face). */ unsigned colors_copied_bitwise_p : 1; /* If non-zero, use overstrike (to simulate bold-face). */ unsigned overstrike : 1; /* Next and previous face in hash collision list of face cache. */ struct face *next, *prev; /* If this face is for ASCII characters, this points this face itself. 
   Otherwise, this points to a face for ASCII characters.  */
  struct face *ascii_face;
};


/* Color index indicating that face uses a terminal's default color.  */

#define FACE_TTY_DEFAULT_COLOR ((unsigned long) -1)

/* Color index indicating that face uses an unknown foreground color.  */

#define FACE_TTY_DEFAULT_FG_COLOR ((unsigned long) -2)

/* Color index indicating that face uses an unknown background color.  */

#define FACE_TTY_DEFAULT_BG_COLOR ((unsigned long) -3)

/* Non-zero if FACE was realized for unibyte use.  */

#define FACE_UNIBYTE_P(FACE) ((FACE)->charset < 0)

/* IDs of important faces known by the C face code.  These are the IDs
   of the faces for CHARSET_ASCII.  */

enum face_id
{
  DEFAULT_FACE_ID,
  MODE_LINE_FACE_ID,
  MODE_LINE_INACTIVE_FACE_ID,
  TOOL_BAR_FACE_ID,
  FRINGE_FACE_ID,
  HEADER_LINE_FACE_ID,
  SCROLL_BAR_FACE_ID,
  BORDER_FACE_ID,
  CURSOR_FACE_ID,
  MOUSE_FACE_ID,
  MENU_FACE_ID,
  VERTICAL_BORDER_FACE_ID,
  BASIC_FACE_ID_SENTINEL
};

#define MAX_FACE_ID  ((1 << FACE_ID_BITS) - 1)

/* A cache of realized faces.  Each frame has its own cache because
   Emacs allows different frame-local face definitions.  */

struct face_cache
{
  /* Hash table of cached realized faces.  */
  struct face **buckets;

  /* Back-pointer to the frame this cache belongs to.  */
  struct frame *f;

  /* A vector of faces so that faces can be referenced by an ID.  */
  struct face **faces_by_id;

  /* The allocated size, and number of used slots of faces_by_id.  */
  int size, used;

  /* Flag indicating that attributes of the `menu' face have been
     changed.  */
  unsigned menu_face_changed_p : 1;
};

/* Prepare face FACE for use on frame F.  This must be called before
   using X resources of FACE.  */

#define PREPARE_FACE_FOR_DISPLAY(F, FACE)	\
  if ((FACE)->gc == 0)				\
    prepare_face_for_display ((F), (FACE));	\
  else						\
    (void) 0

/* Return a pointer to the face with ID on frame F, or null if such a
   face doesn't exist.  */

#define FACE_FROM_ID(F, ID)				\
     (((unsigned) (ID) < FRAME_FACE_CACHE (F)->used)	\
      ? FRAME_FACE_CACHE (F)->faces_by_id[ID]		\
      : NULL)

#ifdef HAVE_WINDOW_SYSTEM

/* Non-zero if FACE is suitable for displaying character CHAR.  */

#define FACE_SUITABLE_FOR_CHAR_P(FACE, CHAR)	\
  (SINGLE_BYTE_CHAR_P (CHAR)			\
   ? (FACE) == (FACE)->ascii_face		\
   : face_suitable_for_char_p ((FACE), (CHAR)))

/* Return the id of the realized face on frame F that is like the face
   with id ID but is suitable for displaying character CHAR.  This
   macro is only meaningful for multibyte character CHAR.  */

#define FACE_FOR_CHAR(F, FACE, CHAR)	\
  (SINGLE_BYTE_CHAR_P (CHAR)		\
   ? (FACE)->ascii_face->id		\
   : face_for_char ((F), (FACE), (CHAR)))

#else /* not HAVE_WINDOW_SYSTEM */

#define FACE_SUITABLE_FOR_CHAR_P(FACE, CHAR) 1
#define FACE_FOR_CHAR(F, FACE, CHAR) ((FACE)->id)

#endif /* not HAVE_WINDOW_SYSTEM */

/* Non-zero means face attributes have been changed since the last
   redisplay.  Used in redisplay_internal.  */

extern int face_change_count;


/***********************************************************************
			       Fringes
 ***********************************************************************/

/* Structure used to describe where and how to draw a fringe bitmap.
   WHICH is the fringe bitmap to draw.  WD and H is the (adjusted)
   width and height of the bitmap, DH is the height adjustment (if
   bitmap is periodic).  X and Y are frame coordinates of the area to
   display the bitmap, DY is relative offset of the bitmap into that
   area.  BX, NX, BY, NY specifies the area to clear if the bitmap
   does not fill the entire area.  FACE is the fringe face.
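   The bounds check in FACE_FROM_ID relies on a common C idiom: casting
   the id to unsigned folds the id < 0 test and the id >= used test
   into a single comparison, because a negative id wraps to a huge
   unsigned value.  A minimal sketch; faces and used stand in for a
   cache's faces_by_id vector and its used count, and the function name
   is made up.

```c
#include <assert.h>
#include <stddef.h>

// Return faces[id] when 0 <= id < used, else NULL.  The single
// unsigned comparison rejects both negative and too-large ids.
static void *
face_from_id_sketch (void **faces, int used, int id)
{
  return (unsigned) id < (unsigned) used ? faces[id] : NULL;
}
```

   This is why the macro can safely be handed an id of -1 (a common
   "no face" sentinel) without a separate sign test.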
 */

struct draw_fringe_bitmap_params
{
  int which;  /* enum fringe_bitmap_type */
  unsigned short *bits;
  int wd, h, dh;
  int x, y;
  int bx, nx, by, ny;
  unsigned cursor_p : 1;
  unsigned overlay_p : 1;
  struct face *face;
};

#define MAX_FRINGE_BITMAPS (1<<FRINGE_ID_BITS)


/***********************************************************************
			    Display Iterator
 ***********************************************************************/

/* Iteration over things to display in current_buffer or in a string.

   The iterator handles:

   1. Overlay strings (after-string, before-string).
   2. Face properties.
   3. Invisible text properties.
   4. Selective display.
   5. Translation of characters via display tables.
   6. Translation of control characters to the forms `\003' or `^C'.
   7. `glyph' and `space-width' properties.

   Iterators are initialized by calling init_iterator or one of the
   equivalent functions below.  A call to get_next_display_element
   loads the iterator structure with information about what next to
   display.  A call to set_iterator_to_next increments the iterator's
   position.

   Characters from overlay strings, display table entries or control
   character translations are returned one at a time.  For example, if
   we have a text of `a\x01' where `a' has a display table definition
   of `cd' and the control character is displayed with a leading
   arrow, then the iterator will return:

   Call		Return	Source		Call next
   --------------------------------------------------
   next		c	display table	move
   next		d	display table	move
   next		^	control char	move
   next		A	control char	move

   The same mechanism is also used to return characters for ellipses
   displayed at the end of invisible text.

   CAVEAT: Under some circumstances, move_.* functions can be called
   asynchronously, e.g. when computing a buffer position from an x and
   y pixel position.  This means that these functions and functions
   called from them SHOULD NOT USE xmalloc and alike.  See also the
   comment at the start of xdisp.c.
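   The next/move call sequence in the table above can be modeled with a
   toy iterator.  This is only a sketch of the mechanism, not the real
   iterator API: struct toy_it, toy_next, and toy_move are invented
   names, with `a' hard-wired to the display-table expansion "cd" and
   the control character \x01 to "^A".

```c
#include <assert.h>
#include <stddef.h>

struct toy_it
{
  const char *text;     // source text
  const char *dpvec;    // current expansion, or NULL if none
  int pos;              // position in text
  int dpvec_index;      // position within the expansion
};

// Load the next display element into *ch; return 0 at end of text.
// Repeated calls without toy_move return the same element.
static int
toy_next (struct toy_it *it, char *ch)
{
  if (it->dpvec == NULL)
    {
      char c = it->text[it->pos];
      if (c == '\0')
        return 0;
      if (c == 'a')
        it->dpvec = "cd";            // display-table entry
      else if (c == '\x01')
        it->dpvec = "^A";            // control-char translation
      else
        {
          *ch = c;                   // ordinary character
          return 1;
        }
      it->dpvec_index = 0;
    }
  *ch = it->dpvec[it->dpvec_index];
  return 1;
}

// Advance past the element just delivered, leaving the expansion
// before moving on in the underlying text.
static void
toy_move (struct toy_it *it)
{
  if (it->dpvec != NULL
      && it->dpvec[++it->dpvec_index] != '\0')
    return;                          // still inside the expansion
  it->dpvec = NULL;
  it->pos++;
}
```

   Iterating over "a\x01" delivers c, d, ^, A in four next/move pairs,
   matching the table.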
*/ /* Enumeration describing what kind of display element an iterator is loaded with after a call to get_next_display_element. */ enum display_element_type { /* A normal character. */ IT_CHARACTER, /* A composition sequence. */ IT_COMPOSITION, /* An image. */ IT_IMAGE, /* A flexible width and height space. */ IT_STRETCH, /* End of buffer or string. */ IT_EOB, /* Truncation glyphs. Never returned by get_next_display_element. Used to get display information about truncation glyphs via produce_glyphs. */ IT_TRUNCATION, /* Continuation glyphs. See the comment for IT_TRUNCATION. */ IT_CONTINUATION }; /* An enumerator for each text property that has a meaning for display purposes. */ enum prop_idx { FONTIFIED_PROP_IDX, FACE_PROP_IDX, INVISIBLE_PROP_IDX, DISPLAY_PROP_IDX, COMPOSITION_PROP_IDX, /* Not a property. Used to indicate changes in overlays. */ OVERLAY_PROP_IDX, /* Sentinel. */ LAST_PROP_IDX }; struct it_slice { Lisp_Object x; Lisp_Object y; Lisp_Object width; Lisp_Object height; }; enum it_method { GET_FROM_BUFFER = 0, GET_FROM_DISPLAY_VECTOR, GET_FROM_COMPOSITION, GET_FROM_STRING, GET_FROM_C_STRING, GET_FROM_IMAGE, GET_FROM_STRETCH, NUM_IT_METHODS }; #define IT_STACK_SIZE 4 struct it { /* The window in which we iterate over current_buffer (or a string). */ Lisp_Object window; struct window *w; /* The window's frame. */ struct frame *f; /* Method to use to load this structure with the next display element. */ enum it_method method; /* The next position at which to check for face changes, invisible text, overlay strings, end of text etc., which see. */ int stop_charpos; /* Maximum string or buffer position + 1. ZV when iterating over current_buffer. */ int end_charpos; /* C string to iterate over. Non-null means get characters from this string, otherwise characters are read from current_buffer or it->string. */ unsigned char *s; /* Number of characters in the string (s, or it->string) we iterate over. 
*/ int string_nchars; /* Start and end of a visible region; -1 if the region is not visible in the window. */ int region_beg_charpos, region_end_charpos; /* Position at which redisplay end trigger functions should be run. */ int redisplay_end_trigger_charpos; /* 1 means multibyte characters are enabled. */ unsigned multibyte_p : 1; /* 1 means window has a mode line at its top. */ unsigned header_line_p : 1; /* 1 means `string' is the value of a `display' property. Don't handle some `display' properties in these strings. */ unsigned string_from_display_prop_p : 1; /* When METHOD == next_element_from_display_vector, this is 1 if we're doing an ellipsis. Otherwise meaningless. */ unsigned ellipsis_p : 1; /* Display table in effect or null for none. */ struct Lisp_Char_Table *dp; /* Current display table vector to return characters from and its end. dpvec null means we are not returning characters from a display table entry; current.dpvec_index gives the current index into dpvec. This same mechanism is also used to return characters from translated control characters, i.e. `\003' or `^C'. */ Lisp_Object *dpvec, *dpend; /* Length in bytes of the char that filled dpvec. A value of zero means that no such character is involved. */ int dpvec_char_len; /* Face id to use for all characters in display vector. -1 if unused. */ int dpvec_face_id; /* Face id of the iterator saved in case a glyph from dpvec contains a face. The face is restored when all glyphs from dpvec have been delivered. */ int saved_face_id; /* Vector of glyphs for control character translation. The pointer dpvec is set to ctl_chars when a control character is translated. This vector is also used for incomplete multibyte character translation (e.g \222\244). Such a character is at most 4 bytes, thus we need at most 16 bytes here. */ Lisp_Object ctl_chars[16]; /* Initial buffer or string position of the iterator, before skipping over display properties and invisible text. 
*/ struct display_pos start; /* Current buffer or string position of the iterator, including position in overlay strings etc. */ struct display_pos current; /* Vector of overlays to process. Overlay strings are processed OVERLAY_STRING_CHUNK_SIZE at a time. */ #define OVERLAY_STRING_CHUNK_SIZE 16 Lisp_Object overlay_strings[OVERLAY_STRING_CHUNK_SIZE]; /* Total number of overlay strings to process. This can be > OVERLAY_STRING_CHUNK_SIZE. */ int n_overlay_strings; /* If non-nil, a Lisp string being processed. If current.overlay_string_index >= 0, this is an overlay string from pos. */ Lisp_Object string; /* Stack of saved values. New entries are pushed when we begin to process an overlay string or a string from a `glyph' property. Entries are popped when we return to deliver display elements from what we previously had. */ struct iterator_stack_entry { Lisp_Object string; int string_nchars; int end_charpos; int stop_charpos; int face_id; /* Save values specific to a given method. */ union { /* method == GET_FROM_IMAGE */ struct { Lisp_Object object; struct it_slice slice; int image_id; } image; /* method == GET_FROM_COMPOSITION */ struct { Lisp_Object object; int c, len; int cmp_id, cmp_len; } comp; /* method == GET_FROM_STRETCH */ struct { Lisp_Object object; } stretch; } u; /* current text and display positions. */ struct text_pos position; struct display_pos current; enum glyph_row_area area; enum it_method method; unsigned multibyte_p : 1; unsigned string_from_display_prop_p : 1; unsigned display_ellipsis_p : 1; /* properties from display property that are reset by another display property. */ Lisp_Object space_width; Lisp_Object font_height; short voffset; } stack[IT_STACK_SIZE]; /* Stack pointer. */ int sp; /* Setting of buffer-local variable selective-display-ellipsis. */ unsigned selective_display_ellipsis_p : 1; /* 1 means control characters are translated into the form `^C' where the `^' can be replaced by a display table entry. 
*/ unsigned ctl_arrow_p : 1; /* -1 means selective display hides everything between a \r and the next newline; > 0 means hide lines indented more than that value. */ int selective; /* An enumeration describing what the next display element is after a call to get_next_display_element. */ enum display_element_type what; /* Face to use. */ int face_id; /* Non-zero means that the current face has a box. */ unsigned face_box_p : 1; /* Non-null means that the current character is the first in a run of characters with box face. */ unsigned start_of_box_run_p : 1; /* Non-zero means that the current character is the last in a run of characters with box face. */ unsigned end_of_box_run_p : 1; /* 1 means overlay strings at end_charpos have been processed. */ unsigned overlay_strings_at_end_processed_p : 1; /* 1 means to ignore overlay strings at current pos, as they have already been processed. */ unsigned ignore_overlay_strings_at_pos_p : 1; /* 1 means the actual glyph is not available in the current system. */ unsigned glyph_not_available_p : 1; /* 1 means the next line in display_line continues a character consisting of more than one glyph, and some glyphs of this character have been put on the previous line. */ unsigned starts_in_middle_of_char_p : 1; /* If 1, saved_face_id contains the id of the face in front of text skipped due to selective display. */ unsigned face_before_selective_p : 1; /* If 1, adjust current glyph so it does not increase current row descent/ascent (line-height property). Reset after this glyph. */ unsigned constrain_row_ascent_descent_p : 1; /* The ID of the default face to use. One of DEFAULT_FACE_ID, MODE_LINE_FACE_ID, etc, depending on what we are displaying. */ int base_face_id; /* If what == IT_CHARACTER, character and length in bytes. This is a character from a buffer or string. It may be different from the character displayed in case that unibyte_display_via_language_environment is set. 
If what == IT_COMPOSITION, the first component of a composition and length in bytes of the composition. */ int c, len; /* If what == IT_COMPOSITION, identification number and length in chars of a composition. */ int cmp_id, cmp_len; /* The character to display, possibly translated to multibyte if unibyte_display_via_language_environment is set. This is set after produce_glyphs has been called. */ int char_to_display; /* If what == IT_IMAGE, the id of the image to display. */ int image_id; /* Values from `slice' property. */ struct it_slice slice; /* Value of the `space-width' property, if any; nil if none. */ Lisp_Object space_width; /* Computed from the value of the `raise' property. */ short voffset; /* Value of the `height' property, if any; nil if none. */ Lisp_Object font_height; /* Object and position where the current display element came from. Object can be a Lisp string in case the current display element comes from an overlay string, or it is buffer. It may also be nil during mode-line update. Position is a position in object. */ Lisp_Object object; struct text_pos position; /* 1 means lines are truncated. */ unsigned truncate_lines_p : 1; /* Number of columns per \t. */ short tab_width; /* Width in pixels of truncation and continuation glyphs. */ short truncation_pixel_width, continuation_pixel_width; /* First and last visible x-position in the display area. If window is hscrolled by n columns, first_visible_x == n * FRAME_COLUMN_WIDTH (f), and last_visible_x == pixel width of W + first_visible_x. */ int first_visible_x, last_visible_x; /* Last visible y-position + 1 in the display area without a mode line, if the window has one. */ int last_visible_y; /* Default amount of additional space in pixels between lines (for window systems only.) */ int extra_line_spacing; /* Max extra line spacing added in this row. */ int max_extra_line_spacing; /* Override font height information for this glyph. Used if override_ascent >= 0. Cleared after this glyph. 
*/ int override_ascent, override_descent, override_boff; /* If non-null, glyphs are produced in glyph_row with each call to produce_glyphs. */ struct glyph_row *glyph_row; /* The area of glyph_row to which glyphs are added. */ enum glyph_row_area area; /* Number of glyphs needed for the last character requested via produce_glyphs. This is 1 except for tabs. */ int nglyphs; /* Width of the display element in pixels. Result of produce_glyphs. */ int pixel_width; /* Current, maximum logical, and maximum physical line height information. Result of produce_glyphs. */ int ascent, descent, max_ascent, max_descent; int phys_ascent, phys_descent, max_phys_ascent, max_phys_descent; /* Current x pixel position within the display line. This value does not include the width of continuation lines in front of the line. The value of current_x is automatically incremented by pixel_width with each call to produce_glyphs. */ int current_x; /* Accumulated width of continuation lines. If > 0, this means we are currently in a continuation line. This is initially zero and incremented/reset by display_line, move_it_to etc. */ int continuation_lines_width; /* Current y-position. Automatically incremented by the height of glyph_row in move_it_to and display_line. */ int current_y; /* Vertical matrix position of first text line in window. */ int first_vpos; /* Current vertical matrix position, or line number. Automatically incremented by move_it_to and display_line. */ int vpos; /* Horizontal matrix position reached in move_it_in_display_line. Only set there, not in display_line. */ int hpos; /* Left fringe bitmap number (enum fringe_bitmap_type). */ unsigned left_user_fringe_bitmap : FRINGE_ID_BITS; /* Right fringe bitmap number (enum fringe_bitmap_type). */ unsigned right_user_fringe_bitmap : FRINGE_ID_BITS; /* Face of the left fringe glyph. */ unsigned left_user_fringe_face_id : FACE_ID_BITS; /* Face of the right fringe glyph. 
*/ unsigned right_user_fringe_face_id : FACE_ID_BITS; }; /* Access to positions of iterator IT. */ #define IT_CHARPOS(IT) CHARPOS ((IT).current.pos) #define IT_BYTEPOS(IT) BYTEPOS ((IT).current.pos) #define IT_STRING_CHARPOS(IT) CHARPOS ((IT).current.string_pos) #define IT_STRING_BYTEPOS(IT) BYTEPOS ((IT).current.string_pos) /* Test if IT has reached the end of its buffer or string. This will only work after get_next_display_element has been called. */ #define ITERATOR_AT_END_P(IT) ((IT)->what == IT_EOB) /* Non-zero means IT is at the end of a line. This is the case if it is either on a newline or on a carriage return and selective display hides the rest of the line. */ #define ITERATOR_AT_END_OF_LINE_P(IT) \ ((IT)->what == IT_CHARACTER \ && ((IT)->c == '\n' \ || ((IT)->c == '\r' && (IT)->selective))) /* Call produce_glyphs or produce_glyphs_hook, if set. Shortcut to avoid the function call overhead. */ #define PRODUCE_GLYPHS(IT) \ do { \ extern int inhibit_free_realized_faces; \ if (rif != NULL) \ rif->produce_glyphs ((IT)); \ else \ produce_glyphs ((IT)); \ if ((IT)->glyph_row != NULL) \ inhibit_free_realized_faces = 1; \ } while (0) /* Bit-flags indicating what operation move_it_to should perform. */ enum move_operation_enum { /* Stop if specified x-position is reached. */ MOVE_TO_X = 0x01, /* Stop if specified y-position is reached. */ MOVE_TO_Y = 0x02, /* Stop if specified vpos is reached. */ MOVE_TO_VPOS = 0x04, /* Stop if specified buffer or string position is reached. */ MOVE_TO_POS = 0x08 }; /*********************************************************************** Window-based redisplay interface ***********************************************************************/ /* Structure used to describe runs of lines that must be scrolled. */ struct run { /* Source and destination y pixel position. */ int desired_y, current_y; /* Source and destination vpos in matrix. */ int desired_vpos, current_vpos; /* Height in pixels, number of glyph rows. 
*/ int height, nrows; }; /* Handlers for setting frame parameters. */ typedef void (*frame_parm_handler) P_ ((struct frame *, Lisp_Object, Lisp_Object)); /* Structure holding system-dependent interface functions needed for window-based redisplay. */ struct redisplay_interface { /* Handlers for setting frame parameters. */ frame_parm_handler *frame_parm_handlers; /* Produce glyphs/get display metrics for the display element IT is loaded with. */ void (*produce_glyphs) P_ ((struct it *it)); /* Write or insert LEN glyphs from STRING at the nominal output position. */ void (*write_glyphs) P_ ((struct glyph *string, int len)); void (*insert_glyphs) P_ ((struct glyph *start, int len)); /* Clear from nominal output position to X. X < 0 means clear to right end of display. */ void (*clear_end_of_line) P_ ((int x)); /* Function to call to scroll the display as described by RUN on window W. */ void (*scroll_run_hook) P_ ((struct window *w, struct run *run)); /* Function to call after a line in a display has been completely updated. Used to draw truncation marks and alike. DESIRED_ROW is the desired row which has been updated. */ void (*after_update_window_line_hook) P_ ((struct glyph_row *desired_row)); /* Function to call before beginning to update window W in window-based redisplay. */ void (*update_window_begin_hook) P_ ((struct window *w)); /* Function to call after window W has been updated in window-based redisplay. CURSOR_ON_P non-zero means switch cursor on. MOUSE_FACE_OVERWRITTEN_P non-zero means that some lines in W that contained glyphs in mouse-face were overwritten, so we have to update the mouse highlight. */ void (*update_window_end_hook) P_ ((struct window *w, int cursor_on_p, int mouse_face_overwritten_p)); /* Move cursor to row/column position VPOS/HPOS, pixel coordinates Y/X. HPOS/VPOS are window-relative row and column numbers and X/Y are window-relative pixel positions. 
*/ void (*cursor_to) P_ ((int vpos, int hpos, int y, int x)); /* Flush the display of frame F. For X, this is XFlush. */ void (*flush_display) P_ ((struct frame *f)); /* Flush the display of frame F if non-NULL. This is called during redisplay, and should be NULL on systems which flushes automatically before reading input. */ void (*flush_display_optional) P_ ((struct frame *f)); /* Clear the mouse hightlight in window W, if there is any. */ void (*clear_window_mouse_face) P_ ((struct window *w)); /* Set *LEFT and *RIGHT to the left and right overhang of GLYPH on frame F. */ void (*get_glyph_overhangs) P_ ((struct glyph *glyph, struct frame *f, int *left, int *right)); /* Fix the display of AREA of ROW in window W for overlapping rows. This function is called from redraw_overlapping_rows after desired rows have been made current. */ void (*fix_overlapping_area) P_ ((struct window *w, struct glyph_row *row, enum glyph_row_area area, int)); #ifdef HAVE_WINDOW_SYSTEM /* Draw a fringe bitmap in window W of row ROW using parameters P. */ void (*draw_fringe_bitmap) P_ ((struct window *w, struct glyph_row *row, struct draw_fringe_bitmap_params *p)); /* Define and destroy fringe bitmap no. WHICH. */ void (*define_fringe_bitmap) P_ ((int which, unsigned short *bits, int h, int wd)); void (*destroy_fringe_bitmap) P_ ((int which)); /* Get metrics of character CHAR2B in FONT of type FONT_TYPE. Value is null if CHAR2B is not contained in the font. */ XCharStruct * (*per_char_metric) P_ ((XFontStruct *font, XChar2b *char2b, int font_type)); /* Encode CHAR2B using encoding information from FONT_INFO. CHAR2B is the two-byte form of C. Encoding is returned in *CHAR2B. If TWO_BYTE_P is non-null, return non-zero there if font is two-byte. */ int (*encode_char) P_ ((int c, XChar2b *char2b, struct font_info *font_into, int *two_byte_p)); /* Compute left and right overhang of glyph string S. A NULL pointer if platform does not support this. 
*/ void (*compute_glyph_string_overhangs) P_ ((struct glyph_string *s)); /* Draw a glyph string S. */ void (*draw_glyph_string) P_ ((struct glyph_string *s)); /* Define cursor CURSOR on frame F. */ void (*define_frame_cursor) P_ ((struct frame *f, Cursor cursor)); /* Clear the area at (X,Y,WIDTH,HEIGHT) of frame F. */ void (*clear_frame_area) P_ ((struct frame *f, int x, int y, int width, int height)); /* Draw specified cursor CURSOR_TYPE of width CURSOR_WIDTH at row GLYPH_ROW on window W if ON_P is 1. If ON_P is 0, don't draw cursor. If ACTIVE_P is 1, system caret should track this cursor (when applicable). */ void (*draw_window_cursor) P_ ((struct window *w, struct glyph_row *glyph_row, int x, int y, int cursor_type, int cursor_width, int on_p, int active_p)); /* Draw vertical border for window W from (X,Y0) to (X,Y1). */ void (*draw_vertical_window_border) P_ ((struct window *w, int x, int y0, int y1)); /* Shift display of frame F to make room for inserted glyphs. The area at pixel (X,Y) of width WIDTH and height HEIGHT is shifted right by SHIFT_BY pixels. */ void (*shift_glyphs_for_insert) P_ ((struct frame *f, int x, int y, int width, int height, int shift_by)); #endif /* HAVE_WINDOW_SYSTEM */ }; /* The current interface for window-based redisplay. */ extern struct redisplay_interface *rif; /*********************************************************************** Images ***********************************************************************/ #ifdef HAVE_WINDOW_SYSTEM /* Structure forward declarations. */ struct image; /* Each image format (JPEG, TIFF, ...) supported is described by a structure of the type below. */ struct image_type { /* A symbol uniquely identifying the image type, .e.g `jpeg'. */ Lisp_Object *type; /* Check that SPEC is a valid image specification for the given image type. Value is non-zero if SPEC is valid. */ int (* valid_p) P_ ((Lisp_Object spec)); /* Load IMG which is used on frame F from information contained in IMG->spec. 
Value is non-zero if successful. */ int (* load) P_ ((struct frame *f, struct image *img)); /* Free resources of image IMG which is used on frame F. */ void (* free) P_ ((struct frame *f, struct image *img)); /* Next in list of all supported image types. */ struct image_type *next; }; /* Structure describing an image. Specific image formats like XBM are converted into this form, so that display only has to deal with this type of image. */ struct image { /* The time in seconds at which the image was last displayed. Set in prepare_image_for_display. */ unsigned long timestamp; /* Pixmaps of the image. */ Pixmap pixmap, mask; /* Colors allocated for this image, if any. Allocated via xmalloc. */ unsigned long *colors; int ncolors; /* A single `background color' for this image, for the use of anyone that cares about such a thing. Only valid if the `background_valid' field is true. This should generally be accessed by calling the accessor macro `IMAGE_BACKGROUND', which will heuristically calculate a value if necessary. */ unsigned long background; /* True if this image has a `transparent' background -- that is, is uses an image mask. The accessor macro for this is `IMAGE_BACKGROUND_TRANSPARENT'. */ unsigned background_transparent : 1; /* True if the `background' and `background_transparent' fields are valid, respectively. */ unsigned background_valid : 1, background_transparent_valid : 1; /* Width and height of the image. */ int width, height; /* These values are used for the rectangles displayed for images that can't be loaded. */ #define DEFAULT_IMAGE_WIDTH 30 #define DEFAULT_IMAGE_HEIGHT 30 /* Top/left and bottom/right corner pixel of actual image data. Used by four_corners_best to consider the real image data, rather than looking at the optional image margin. */ int corners[4]; #define TOP_CORNER 0 #define LEFT_CORNER 1 #define BOT_CORNER 2 #define RIGHT_CORNER 3 /* Percent of image height used as ascent. 
A value of CENTERED_IMAGE_ASCENT means draw the image centered on the line. */ int ascent; #define DEFAULT_IMAGE_ASCENT 50 #define CENTERED_IMAGE_ASCENT -1 /* Lisp specification of this image. */ Lisp_Object spec; /* Relief to draw around the image. */ int relief; /* Optional margins around the image. This includes the relief. */ int hmargin, vmargin; /* Reference to the type of the image. */ struct image_type *type; /* 1 means that loading the image failed. Don't try again. */ unsigned load_failed_p; /* A place for image types to store additional data. The member data.lisp_val is marked during GC, so it's safe to store Lisp data there. Image types should free this data when their `free' function is called. */ struct { int int_val; void *ptr_val; Lisp_Object lisp_val; } data; /* Hash value of image specification to speed up comparisons. */ unsigned hash; /* Image id of this image. */ int id; /* Hash collision chain. */ struct image *next, *prev; }; /* Cache of images. Each frame has a cache. X frames with the same x_display_info share their caches. */ struct image_cache { /* Hash table of images. */ struct image **buckets; /* Vector mapping image ids to images. */ struct image **images; /* Allocated size of `images'. */ unsigned size; /* Number of images in the cache. */ unsigned used; /* Reference count (number of frames sharing this cache). */ int refcount; }; /* Value is a pointer to the image with id ID on frame F, or null if no image with that id exists. */ #define IMAGE_FROM_ID(F, ID) \ (((ID) >= 0 && (ID) < (FRAME_X_IMAGE_CACHE (F)->used)) \ ? FRAME_X_IMAGE_CACHE (F)->images[ID] \ : NULL) /* Size of bucket vector of image caches. Should be prime. 
*/ #define IMAGE_CACHE_BUCKETS_SIZE 1001 #endif /* HAVE_WINDOW_SYSTEM */ /*********************************************************************** Tool-bars ***********************************************************************/ /* Enumeration defining where to find tool-bar item information in tool-bar items vectors stored with frames. Each tool-bar item occupies TOOL_BAR_ITEM_NSLOTS elements in such a vector. */ enum tool_bar_item_idx { /* The key of the tool-bar item. Used to remove items when a binding for `undefined' is found. */ TOOL_BAR_ITEM_KEY, /* Non-nil if item is enabled. */ TOOL_BAR_ITEM_ENABLED_P, /* Non-nil if item is selected (pressed). */ TOOL_BAR_ITEM_SELECTED_P, /* Caption. */ TOOL_BAR_ITEM_CAPTION, /* Image(s) to display. This is either a single image specification or a vector of specifications. */ TOOL_BAR_ITEM_IMAGES, /* The binding. */ TOOL_BAR_ITEM_BINDING, /* Button type. One of nil, `:radio' or `:toggle'. */ TOOL_BAR_ITEM_TYPE, /* Help string. */ TOOL_BAR_ITEM_HELP, /* Sentinel = number of slots in tool_bar_items occupied by one tool-bar item. */ TOOL_BAR_ITEM_NSLOTS }; /* An enumeration for the different images that can be specified for a tool-bar item. */ enum tool_bar_item_image { TOOL_BAR_IMAGE_ENABLED_SELECTED, TOOL_BAR_IMAGE_ENABLED_DESELECTED, TOOL_BAR_IMAGE_DISABLED_SELECTED, TOOL_BAR_IMAGE_DISABLED_DESELECTED }; /* Margin around tool-bar buttons in pixels. */ extern Lisp_Object Vtool_bar_button_margin; /* Thickness of relief to draw around tool-bar buttons. */ extern EMACS_INT tool_bar_button_relief; /* Default values of the above variables. */ #define DEFAULT_TOOL_BAR_BUTTON_MARGIN 4 #define DEFAULT_TOOL_BAR_BUTTON_RELIEF 1 /* The height in pixels of the default tool-bar images. 
*/ #define DEFAULT_TOOL_BAR_IMAGE_HEIGHT 24 /*********************************************************************** Terminal Capabilities ***********************************************************************/ /* Each of these is a bit representing a terminal `capability' (bold, inverse, etc). They are or'd together to specify the set of capabilities being queried for when calling `tty_capable_p' (which returns true if the terminal supports all of them). */ #define TTY_CAP_INVERSE 0x01 #define TTY_CAP_UNDERLINE 0x02 #define TTY_CAP_BOLD 0x04 #define TTY_CAP_DIM 0x08 #define TTY_CAP_BLINK 0x10 #define TTY_CAP_ALT_CHARSET 0x20 /*********************************************************************** Function Prototypes ***********************************************************************/ /* Defined in xdisp.c */ struct glyph_row *row_containing_pos P_ ((struct window *, int, struct glyph_row *, struct glyph_row *, int)); int string_buffer_position P_ ((struct window *, Lisp_Object, int)); int line_bottom_y P_ ((struct it *)); int display_prop_intangible_p P_ ((Lisp_Object)); void resize_echo_area_exactly P_ ((void)); int resize_mini_window P_ ((struct window *, int)); int try_window P_ ((Lisp_Object, struct text_pos, int)); void window_box P_ ((struct window *, int, int *, int *, int *, int *)); int window_box_height P_ ((struct window *)); int window_text_bottom_y P_ ((struct window *)); int window_box_width P_ ((struct window *, int)); int window_box_left P_ ((struct window *, int)); int window_box_left_offset P_ ((struct window *, int)); int window_box_right P_ ((struct window *, int)); int window_box_right_offset P_ ((struct window *, int)); void window_box_edges P_ ((struct window *, int, int *, int *, int *, int *)); int estimate_mode_line_height P_ ((struct frame *, enum face_id)); void pixel_to_glyph_coords P_ ((struct frame *, int, int, int *, int *, NativeRectangle *, int)); int glyph_to_pixel_coords P_ ((struct window *, int, int, int *, int *)); void 
remember_mouse_glyph P_ ((struct frame *, int, int, NativeRectangle *)); void mark_window_display_accurate P_ ((Lisp_Object, int)); void redisplay_preserve_echo_area P_ ((int)); int set_cursor_from_row P_ ((struct window *, struct glyph_row *, struct glyph_matrix *, int, int, int, int)); void init_iterator P_ ((struct it *, struct window *, int, int, struct glyph_row *, enum face_id)); void init_iterator_to_row_start P_ ((struct it *, struct window *, struct glyph_row *)); int get_next_display_element P_ ((struct it *)); void set_iterator_to_next P_ ((struct it *, int)); void produce_glyphs P_ ((struct it *)); void produce_special_glyphs P_ ((struct it *, enum display_element_type)); void start_display P_ ((struct it *, struct window *, struct text_pos)); void move_it_to P_ ((struct it *, int, int, int, int, int)); void move_it_vertically P_ ((struct it *, int)); void move_it_vertically_backward P_ ((struct it *, int)); void move_it_by_lines P_ ((struct it *, int, int)); void move_it_past_eol P_ ((struct it *)); int in_display_vector_p P_ ((struct it *)); int frame_mode_line_height P_ ((struct frame *)); void highlight_trailing_whitespace P_ ((struct frame *, struct glyph_row *)); extern Lisp_Object Qtool_bar; extern Lisp_Object Vshow_trailing_whitespace; extern int mode_line_in_non_selected_windows; extern int redisplaying_p; extern void add_to_log P_ ((char *, Lisp_Object, Lisp_Object)); extern int help_echo_showing_p; extern int current_mode_line_height, current_header_line_height; extern Lisp_Object help_echo_string, help_echo_window; extern Lisp_Object help_echo_object, previous_help_echo_string; extern int help_echo_pos; extern struct frame *last_mouse_frame; extern int last_tool_bar_item; extern Lisp_Object Vmouse_autoselect_window; extern int unibyte_display_via_language_environment; extern void reseat_at_previous_visible_line_start P_ ((struct it *)); extern int calc_pixel_width_or_height P_ ((double *, struct it *, Lisp_Object, /* XFontStruct */ void *, 
int, int *)); #ifdef HAVE_WINDOW_SYSTEM #if GLYPH_DEBUG extern void dump_glyph_string P_ ((struct glyph_string *)); #endif extern void x_get_glyph_overhangs P_ ((struct glyph *, struct frame *, int *, int *)); extern void x_produce_glyphs P_ ((struct it *)); extern void x_write_glyphs P_ ((struct glyph *, int)); extern void x_insert_glyphs P_ ((struct glyph *, int len)); extern void x_clear_end_of_line P_ ((int)); extern int x_stretch_cursor_p; extern struct cursor_pos output_cursor; extern void x_fix_overlapping_area P_ ((struct window *, struct glyph_row *, enum glyph_row_area, int)); extern void draw_phys_cursor_glyph P_ ((struct window *, struct glyph_row *, enum draw_glyphs_face)); extern void get_phys_cursor_geometry P_ ((struct window *, struct glyph_row *, struct glyph *, int *, int *, int *)); extern void erase_phys_cursor P_ ((struct window *)); extern void display_and_set_cursor P_ ((struct window *, int, int, int, int, int)); extern void set_output_cursor P_ ((struct cursor_pos *)); extern void x_cursor_to P_ ((int, int, int, int)); extern void x_update_cursor P_ ((struct frame *, int)); extern void x_clear_cursor P_ ((struct window *)); extern void x_draw_vertical_border P_ ((struct window *w)); extern void frame_to_window_pixel_xy P_ ((struct window *, int *, int *)); extern int get_glyph_string_clip_rects P_ ((struct glyph_string *, NativeRectangle *, int)); extern void get_glyph_string_clip_rect P_ ((struct glyph_string *, NativeRectangle *nr)); extern Lisp_Object find_hot_spot P_ ((Lisp_Object, int, int)); extern void note_mouse_highlight P_ ((struct frame *, int, int)); extern void x_clear_window_mouse_face P_ ((struct window *)); extern void cancel_mouse_face P_ ((struct frame *)); extern void handle_tool_bar_click P_ ((struct frame *, int, int, int, unsigned int)); /* msdos.c defines its own versions of these functions. 
*/ extern int clear_mouse_face P_ ((Display_Info *)); extern void show_mouse_face P_ ((Display_Info *, enum draw_glyphs_face)); extern int cursor_in_mouse_face_p P_ ((struct window *w)); extern void expose_frame P_ ((struct frame *, int, int, int, int)); extern int x_intersect_rectangles P_ ((XRectangle *, XRectangle *, XRectangle *)); #endif /* Defined in fringe.c */ int lookup_fringe_bitmap (Lisp_Object); void draw_fringe_bitmap P_ ((struct window *, struct glyph_row *, int)); void draw_row_fringe_bitmaps P_ ((struct window *, struct glyph_row *)); int draw_window_fringes P_ ((struct window *, int)); int update_window_fringes P_ ((struct window *, int)); void compute_fringe_widths P_ ((struct frame *, int)); #ifdef WINDOWS_NT void w32_init_fringe P_ ((void)); void w32_reset_fringes P_ ((void)); #endif #ifdef MAC_OS void mac_init_fringe P_ ((void)); #endif /* Defined in image.c */ #ifdef HAVE_WINDOW_SYSTEM extern int x_bitmap_height P_ ((struct frame *, int)); extern int x_bitmap_width P_ ((struct frame *, int)); extern int x_bitmap_pixmap P_ ((struct frame *, int)); extern void x_reference_bitmap P_ ((struct frame *, int)); extern int x_create_bitmap_from_data P_ ((struct frame *, char *, unsigned int, unsigned int)); extern int x_create_bitmap_from_file P_ ((struct frame *, Lisp_Object)); #if defined (HAVE_XPM) && defined (HAVE_X_WINDOWS) extern int x_create_bitmap_from_xpm_data P_ ((struct frame *f, char **bits)); #endif #ifndef x_destroy_bitmap extern void x_destroy_bitmap P_ ((struct frame *, int)); #endif extern void x_destroy_all_bitmaps P_ ((Display_Info *)); extern int x_create_bitmap_mask P_ ((struct frame * , int)); extern Lisp_Object x_find_image_file P_ ((Lisp_Object)); void x_kill_gs_process P_ ((Pixmap, struct frame *)); struct image_cache *make_image_cache P_ ((void)); void free_image_cache P_ ((struct frame *)); void clear_image_cache P_ ((struct frame *, int)); void forall_images_in_image_cache P_ ((struct frame *, void (*) P_ ((struct image 
*)))); int valid_image_p P_ ((Lisp_Object)); void prepare_image_for_display P_ ((struct frame *, struct image *)); int lookup_image P_ ((struct frame *, Lisp_Object)); unsigned long image_background P_ ((struct image *, struct frame *, XImagePtr_or_DC ximg)); int image_background_transparent P_ ((struct image *, struct frame *, XImagePtr_or_DC mask)); int image_ascent P_ ((struct image *, struct face *, struct glyph_slice *)); #endif /* Defined in sysdep.c */ void get_frame_size P_ ((int *, int *)); void request_sigio P_ ((void)); void unrequest_sigio P_ ((void)); int tabs_safe_p P_ ((void)); void init_baud_rate P_ ((void)); void init_sigio P_ ((int)); /* Defined in xfaces.c */ #ifdef HAVE_X_WINDOWS void x_free_colors P_ ((struct frame *, unsigned long *, int)); #endif void update_face_from_frame_parameter P_ ((struct frame *, Lisp_Object, Lisp_Object)); Lisp_Object tty_color_name P_ ((struct frame *, int)); void clear_face_cache P_ ((int)); unsigned long load_color P_ ((struct frame *, struct face *, Lisp_Object, enum lface_attribute_index)); void unload_color P_ ((struct frame *, unsigned long)); int face_font_available_p P_ ((struct frame *, Lisp_Object)); int ascii_face_of_lisp_face P_ ((struct frame *, int)); void prepare_face_for_display P_ ((struct frame *, struct face *)); int xstricmp P_ ((const unsigned char *, const unsigned char *)); int lookup_face P_ ((struct frame *, Lisp_Object *, int, struct face *)); int lookup_named_face P_ ((struct frame *, Lisp_Object, int, int)); int smaller_face P_ ((struct frame *, int, int)); int face_with_height P_ ((struct frame *, int, int)); int lookup_derived_face P_ ((struct frame *, Lisp_Object, int, int, int)); void init_frame_faces P_ ((struct frame *)); void free_frame_faces P_ ((struct frame *)); void recompute_basic_faces P_ ((struct frame *)); int face_at_buffer_position P_ ((struct window *, int, int, int, int *, int, int)); int face_at_string_position P_ ((struct window *, Lisp_Object, int, int, int, int, int 
*, enum face_id, int)); int merge_faces P_ ((struct frame *, Lisp_Object, int, int)); int compute_char_face P_ ((struct frame *, int, Lisp_Object)); void free_all_realized_faces P_ ((Lisp_Object)); extern Lisp_Object Qforeground_color, Qbackground_color; extern Lisp_Object Qframe_set_background_mode; extern char unspecified_fg[], unspecified_bg[]; void free_realized_multibyte_face P_ ((struct frame *, int)); /* Defined in xfns.c */ #ifdef HAVE_X_WINDOWS void gamma_correct P_ ((struct frame *, XColor *)); #endif #ifdef WINDOWSNT void gamma_correct P_ ((struct frame *, COLORREF *)); #endif #ifdef MAC_OS void gamma_correct P_ ((struct frame *, unsigned long *)); #endif #ifdef HAVE_WINDOW_SYSTEM int x_screen_planes P_ ((struct frame *)); void x_implicitly_set_name P_ ((struct frame *, Lisp_Object, Lisp_Object)); extern Lisp_Object tip_frame; extern Window tip_window; EXFUN (Fx_show_tip, 6); EXFUN (Fx_hide_tip, 0); extern void start_hourglass P_ ((void)); extern void cancel_hourglass P_ ((void)); extern int hourglass_started P_ ((void)); extern int display_hourglass_p; /* Returns the background color of IMG, calculating one heuristically if necessary. If non-zero, XIMG is an existing XImage object to use for the heuristic. */ #define IMAGE_BACKGROUND(img, f, ximg) \ ((img)->background_valid \ ? (img)->background \ : image_background (img, f, ximg)) /* Returns true if IMG has a `transparent' background, using heuristics to decide if necessary. If non-zero, MASK is an existing XImage object to use for the heuristic. */ #define IMAGE_BACKGROUND_TRANSPARENT(img, f, mask) \ ((img)->background_transparent_valid \ ? 
(img)->background_transparent \ : image_background_transparent (img, f, mask)) #endif /* HAVE_WINDOW_SYSTEM */ /* Defined in xmenu.c */ int popup_activated P_ ((void)); /* Defined in dispnew.c */ extern int inverse_video; extern int required_matrix_width P_ ((struct window *)); extern int required_matrix_height P_ ((struct window *)); extern Lisp_Object buffer_posn_from_coords P_ ((struct window *, int *, int *, struct display_pos *, Lisp_Object *, int *, int *, int *, int *)); extern Lisp_Object mode_line_string P_ ((struct window *, enum window_part, int *, int *, int *, Lisp_Object *, int *, int *, int *, int *)); extern Lisp_Object marginal_area_string P_ ((struct window *, enum window_part, int *, int *, int *, Lisp_Object *, int *, int *, int *, int *)); extern void redraw_frame P_ ((struct frame *)); extern void redraw_garbaged_frames P_ ((void)); extern void cancel_line P_ ((int, struct frame *)); extern void init_desired_glyphs P_ ((struct frame *)); extern int scroll_frame_lines P_ ((struct frame *, int, int, int, int)); extern int direct_output_for_insert P_ ((int)); extern int direct_output_forward_char P_ ((int)); extern int update_frame P_ ((struct frame *, int, int)); extern int scrolling P_ ((struct frame *)); extern void bitch_at_user P_ ((void)); void adjust_glyphs P_ ((struct frame *)); void free_glyphs P_ ((struct frame *)); void free_window_matrices P_ ((struct window *)); void check_glyph_memory P_ ((void)); void mirrored_line_dance P_ ((struct glyph_matrix *, int, int, int *, char *)); void clear_glyph_matrix P_ ((struct glyph_matrix *)); void clear_current_matrices P_ ((struct frame *f)); void clear_desired_matrices P_ ((struct frame *)); void shift_glyph_matrix P_ ((struct window *, struct glyph_matrix *, int, int, int)); void rotate_matrix P_ ((struct glyph_matrix *, int, int, int)); void increment_matrix_positions P_ ((struct glyph_matrix *, int, int, int, int)); void blank_row P_ ((struct window *, struct glyph_row *, int)); void 
increment_row_positions P_ ((struct glyph_row *, int, int)); void enable_glyph_matrix_rows P_ ((struct glyph_matrix *, int, int, int)); void clear_glyph_row P_ ((struct glyph_row *)); void prepare_desired_row P_ ((struct glyph_row *)); int line_hash_code P_ ((struct glyph_row *)); void set_window_update_flags P_ ((struct window *, int)); void write_glyphs P_ ((struct glyph *, int)); void insert_glyphs P_ ((struct glyph *, int)); void redraw_frame P_ ((struct frame *)); void redraw_garbaged_frames P_ ((void)); int scroll_cost P_ ((struct frame *, int, int, int)); int direct_output_for_insert P_ ((int)); int direct_output_forward_char P_ ((int)); int update_frame P_ ((struct frame *, int, int)); void update_single_window P_ ((struct window *, int)); int scrolling P_ ((struct frame *)); void do_pending_window_change P_ ((int)); void change_frame_size P_ ((struct frame *, int, int, int, int, int)); void bitch_at_user P_ ((void)); void init_display P_ ((void)); void syms_of_display P_ ((void)); extern Lisp_Object Qredisplay_dont_pause; GLYPH spec_glyph_lookup_face P_ ((struct window *, GLYPH)); /* Defined in term.c */ extern void ring_bell P_ ((void)); extern void set_terminal_modes P_ ((void)); extern void reset_terminal_modes P_ ((void)); extern void update_begin P_ ((struct frame *)); extern void update_end P_ ((struct frame *)); extern void set_terminal_window P_ ((int)); extern void set_scroll_region P_ ((int, int)); extern void turn_off_insert P_ ((void)); extern void turn_off_highlight P_ ((void)); extern void background_highlight P_ ((void)); extern void clear_frame P_ ((void)); extern void clear_end_of_line P_ ((int)); extern void clear_end_of_line_raw P_ ((int)); extern void delete_glyphs P_ ((int)); extern void ins_del_lines P_ ((int, int)); extern int string_cost P_ ((char *)); extern int per_line_cost P_ ((char *)); extern void calculate_costs P_ ((struct frame *)); extern void set_tty_color_mode P_ ((struct frame *, Lisp_Object)); extern void 
tty_setup_colors P_ ((int)); extern void term_init P_ ((char *)); void cursor_to P_ ((int, int)); extern int tty_capable_p P_ ((struct frame *, unsigned, unsigned long, unsigned long)); /* Defined in scroll.c */ extern int scrolling_max_lines_saved P_ ((int, int, int *, int *, int *)); extern int scroll_cost P_ ((struct frame *, int, int, int)); extern void do_line_insertion_deletion_costs P_ ((struct frame *, char *, char *, char *, char *, char *, char *, int)); void scrolling_1 P_ ((struct frame *, int, int, int, int *, int *, int *, int *, int)); /* Defined in frame.c */ #ifdef HAVE_WINDOW_SYSTEM /* Types we might convert a resource string into. */ enum resource_types { RES_TYPE_NUMBER, RES_TYPE_FLOAT, RES_TYPE_BOOLEAN, RES_TYPE_STRING, RES_TYPE_SYMBOL }; extern Lisp_Object x_get_arg P_ ((Display_Info *, Lisp_Object, Lisp_Object, char *, char *class, enum resource_types)); extern Lisp_Object x_frame_get_arg P_ ((struct frame *, Lisp_Object, Lisp_Object, char *, char *, enum resource_types)); extern Lisp_Object x_frame_get_and_record_arg P_ (( struct frame *, Lisp_Object, Lisp_Object, char *, char *, enum resource_types)); extern Lisp_Object x_default_parameter P_ ((struct frame *, Lisp_Object, Lisp_Object, Lisp_Object, char *, char *, enum resource_types)); #endif /* HAVE_WINDOW_SYSTEM */ #endif /* not DISPEXTERN_H_INCLUDED */ /* arch-tag: c65c475f-1c1e-4534-8795-990b8509fd65 (do not change this comment) */ | http://opensource.apple.com//source/emacs/emacs-88.1/emacs/src/dispextern.h | CC-MAIN-2016-36 | refinedweb | 12,409 | 56.35 |
JSON and validation

Introduction
Morepath lets you define a JSON representation for arbitrary Python objects. When you return such an object from a JSON view, the object is automatically converted to JSON.
When JSON comes in as the POST or PUT body of the request, you can define how it is to be converted to a Python object and how it is to be validated.
This feature lets you plug in external (de)serialization libraries, such as Marshmallow. We've provided Marshmallow integration for Morepath in more.marshmallow.

The load function for views
When you specify the load function in a view directive, you can specify how to turn the request body of a POST or PUT request into a Python object for that view. This Python object comes in as the third argument to your view function:
def my_load(request):
    return request.json

@App.json(model=Item, request_method='POST', load=my_load)
def item_post(self, request, obj):
    # the third argument, obj, contains the result of my_load(request)
    ...
The load function takes the request and must return some Python object (such as a simple dict). If the data supplied in the request body is incorrect and cannot be converted into a Python object, then you should raise an exception. This can be a WebOb exception (we suggest webob.exc.HTTPUnprocessableEntity), but you could also define your own custom exception and provide a view for it that sets the status to 422. This way, conversion and validation errors are reported to the end user.
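As an illustration of that pattern — a sketch only, written outside Morepath itself, so the request below is a stand-in object with a json attribute rather than a real WebOb request, the ValidationError class plays the role of the suggested webob.exc.HTTPUnprocessableEntity, and the title/price field names are invented for the example — a validating load function might look like this:

```python
class ValidationError(Exception):
    """Stand-in for webob.exc.HTTPUnprocessableEntity (or a custom 422 exception)."""


def load_item(request):
    # request.json is assumed to already hold the parsed JSON body.
    body = request.json
    if not isinstance(body, dict):
        raise ValidationError("expected a JSON object")
    missing = {"title", "price"} - set(body)
    if missing:
        raise ValidationError("missing fields: " + ", ".join(sorted(missing)))
    if not isinstance(body["price"], (int, float)) or body["price"] < 0:
        raise ValidationError("price must be a non-negative number")
    # The dict returned here is what the view receives as its third argument.
    return {"title": body["title"], "price": body["price"]}
```

In a real application this function would be passed as load=load_item in the view directive, and a view registered for ValidationError would set the response status to 422 so that the error reaches the client.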
The first case: if I use an odd number for the width in the ROI of the set_windowing method, there is some strange distortion. Secondly, with some even-numbered widths the image jumps around, for example if I use 20 pixels.

I have a video (with the newest version of the IDE)!
Code:

sensor.set_windowing((10, 0, 20, 120))
Code:

import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
sensor.snapshot()

for i in range(1, sensor.width()):
    roi = (0, 0, i, sensor.height())
    print(roi)
    sensor.set_windowing(roi)
    sensor.skip_frames(time = 1000)
    sensor.snapshot()
This:

Code:

img.save("snapshot" + str(pyb.millis()) + ".jpg", quality=50)

produces files such as "snapshot2827052.jpg.bmp".
I'm not sure whether the app I use to view the image is just looking at the file headers to display it correctly, because I can open it with the .bmp extension or without it.
Hi guys,
We have a product that runs in a web environment and should allow customers to translate the bundle entries at runtime, with the changes picked up without re-deployment. We successfully achieve the goal of translating the resources by using ListResourceBundle implementations, but the problem is that we can't offer customers the ability to add a new language (a new Locale). Currently we need the physical resource bundle .java file in order for users to start translating text at runtime.

Does anybody know of a solution that would allow us to register new bundles dynamically?
Thanks,
Florin
Storing translations in a database is quite a common approach.

You could extend ListResourceBundle to fetch translations from the database on the fly...
bye
TPD
Thank you for your answer. As I mentioned in my initial post, we are already using ListResourceBundle implementations.

The problem is that we have one ListResourceBundle file for each language, e.g.:
1. CustomBundle.java
2. CustomBundle_DE.java
3. CustomBundle_FR.java
So, when we need to add a new language, Spanish for example, we would need to create one more physical file.
My question: is there any way to handle all 3 languages from a single Java file only?
Thanks,
Florin
Rather buried in the docs but the following method
public static final ResourceBundle getBundle(String baseName)
Documents the parameter as follows
baseName - the base name of the resource bundle, a fully qualified class name
You then need a class that resolves any calls to the database. I suspect, however, that you are actually going to end up using the explicit classloader version of the method, since it might attempt to add the language extension to the class name (just guessing); if that is the case then a custom class loader could be used to return (maybe) a single class with some construction semantics, or a proxy.
At any rate the final class would resolve to a database call.
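If you are on Java 6 or later, ResourceBundle.Control is probably the cleanest route: its newBundle() hook lets a single class serve any locale, so adding Spanish becomes a data change rather than a new CustomBundle_ES.java. A rough sketch – the nested map stands in for your database, and all names here are invented:

```java
import java.util.*;

// Rough sketch: one Control serves every locale from a single data source.
class DbBundleControl extends ResourceBundle.Control {

    // Stand-in for the database: language code -> (key -> translation).
    static final Map<String, Map<String, String>> DATA =
            new HashMap<String, Map<String, String>>();
    static {
        Map<String, String> fallback = new HashMap<String, String>();
        fallback.put("greeting", "Hello");
        DATA.put("", fallback);                   // root/default translations

        Map<String, String> spanish = new HashMap<String, String>();
        spanish.put("greeting", "Hola");
        DATA.put("es", spanish);                  // "registered" at runtime
    }

    @Override
    public ResourceBundle newBundle(String baseName, Locale locale, String format,
                                    ClassLoader loader, boolean reload) {
        final Map<String, String> rows = DATA.get(locale.getLanguage());
        if (rows == null) {
            return null;  // let getBundle() fall through to the next candidate locale
        }
        return new ListResourceBundle() {
            protected Object[][] getContents() {
                Object[][] contents = new Object[rows.size()][2];
                int i = 0;
                for (Map.Entry<String, String> e : rows.entrySet()) {
                    contents[i][0] = e.getKey();
                    contents[i][1] = e.getValue();
                    i++;
                }
                return contents;
            }
        };
    }
}
```

With that in place, ResourceBundle.getBundle("CustomBundle", new Locale("es"), new DbBundleControl()) returns the Spanish entries even though no CustomBundle_ES class exists on disk.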
XmListAddItemsUnselected - Man Page
A List function that adds items to a list
Synopsis
#include <Xm/List.h>

void XmListAddItemsUnselected(
        Widget widget,
        XmString *items,
        int item_count,
        int position);
Description
XmListAddItemsUnselected adds the specified items to the list at the given position. The inserted items remain unselected, even if they currently appear in the XmNselectedItems list.
- widget
Specifies the ID of the List widget to add items to.
- items
Specifies a pointer to the items to be added to the list.
- item_count
Specifies the number of elements in items. This number must be nonnegative.
- position
Specifies the position of the first new item in the list. A value of 1 makes the first new item the first item in the list; a value of 2 makes it the second item; and so on. A value of 0 (zero) makes the first new item follow the last item of the list.
For a complete definition of List and its associated resources, see XmList(3).
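As an illustration of the position semantics only, the following plain-C model manipulates an ordinary array the way the widget manages its items (illustrative code – a real program passes XmString items through the widget, not a char* array):

```c
#include <string.h>

/* Illustrative model of the position argument: position 1 inserts at the
   front, position 0 appends after the last existing item. */
static void add_items_model(const char **list, int *count,
                            const char **items, int item_count, int position)
{
    int insert_at = (position == 0) ? *count : position - 1;

    /* Shift the existing tail up to make room for the new items. */
    memmove(&list[insert_at + item_count], &list[insert_at],
            (*count - insert_at) * sizeof(list[0]));
    memcpy(&list[insert_at], items, item_count * sizeof(list[0]));
    *count += item_count;
}
```

With an existing list {"a", "b"}, adding {"x", "y"} at position 2 yields {"a", "x", "y", "b"}, and position 0 would have appended them after "b".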
Related
XmList(3).
Referenced By
XmList(3).
I'm working on a random number guessing game with 5 tries that uses standard JOptionPane GUI and 3 methods, the requirements have each method doing a specific task.
The main method starts a game and is supposed to handle the 'you won' or 'you lost' dialogue, playing again if the user clicks OK and stopping if they click Cancel.
The playGame method plays a full game and returns a boolean value indicating whether the user wants to play another.
And the compareTo method compares the guess with the actual random answer and returns 0 if they match, a positive number if the guess was too high, and a negative number if the guess was too low.
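(On its own, I know the comparison I'm aiming for boils down to something like this, which compiles fine by itself:)

```java
// Standalone version of the comparison rule described above.
class CompareSketch {
    static int compareTo(int guess, int answer) {
        if (guess == answer) {
            return 0;                      // match
        }
        return (guess < answer) ? -1 : 1;  // negative = too low, positive = too high
    }
}
```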
I cannot see why I'm getting a compiler error from my JOptionPane messages; the syntax seems correct. Would I need to make a global counter variable for the won-or-lost option in the main method? Either that, or make a new boolean called win in main? I've coded myself into a maze. Any help is appreciated.
// Java Project 3 10/3/11
// Joe Owens
//
import java.util.*;
import javax.swing.*;

public class GuessGame {

    public static void main(String[] args) {
        /* This method is responsible for the "Play again?" logic, including the
           "you won/you lost" dialog. This means a loop that runs at least once,
           and continues until the player quits. */
        boolean playAgain = false;
        boolean win = false;
        do {
            if (playGame(win))
                JOptionPane.showMessageDialog("You win, play again?.", "Choose one",
                        JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE);
            else
                JOptionPane.showMessageDialog("You lost, play again?.", "Choose one",
                        JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE);
        } while (playAgain == true);
    }

    static boolean playGame(boolean win) {
        /*. */
        int chance = 4;    // chances total
        int guessCtr = 0;  // guesses made
        String guessStr;
        Random rand = new Random();
        int answer = rand.nextInt(9) + 1; // gives random number 0 to 9 and adds 1 to make 1-10.
        while (guessCtr < chance) {
            guessStr = JOptionPane.showInputDialog("Guess a number 1 to 10.",
                    "" + chance + " chances left",
                    JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE);
            try {
                int guess = Integer.parseInt(guessStr);
            } catch (Exception ex) { // error handling for invalid inputs such as letters
                JOptionPane.showInputDialog("Invalid input, please try again.",
                        "" + chance + " chances left",
                        JOptionPane.OK_CANCEL_OPTION, JOptionPane.ERROR_MESSAGE);
            }
            continue;
            if (compareTo(guess, answer) == 0) {
                JOptionPane.showMessageDialog("You are correct, well done!.", "Choose one",
                        JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE);
                return true;
                break;
            } else if (compareTo(guess, answer) < 0) {
                JOptionPane.showInputDialog("Your guess was too low.",
                        "" + chance + " chances left",
                        JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE);
                break;
            } else if (compareTo(guess, answer) > 0) { // could make this else
                JOptionPane.showInputDialog("Your guess was too high.",
                        "" + chance + " chances left",
                        JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE);
                break;
            }
            guessCtr++;
        }
        return false;
    }

    static int compareTo(int compGuess, int compAnswer) {
        /* This method compares the user input (a single guess) with the correct
           answer. It returns a negative integer if the guess is too low, a positive
           integer if the guess is too high, and 0 (zero) if the guess is correct */
        //int result = Integer.compareTo( compGuess, compAnswer );
        //int result = compGuess.compareTo( compAnswer );
        //return result;
        //I was trying all 3 ways, built in compareTo both giving a compiler error
        if (compGuess == compAnswer) {
            return 0;
        } else if (compGuess < compAnswer) {
            return -1;
        } else if (compGuess > compAnswer) {
            return 1;
        }
    }
}
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago
#18685 closed Bug (duplicate)
Management commands in multiple submodules in the same virtual package don't work
Description
If you have commands in two packages, mypackage.A and mypackage.B, which are installed from separate egg-link files (develop mode of setuptools), only the commands in the first package will be available at the Django command line.
I've attached an example which shows the problem. To try it out, do the following:
1) run "python setup.py develop" in project-A/ and project-B/
2) run "python manage.py command_A" and "python manage.py command_B" in project-C/
Expected: both commands work
Actual: only command_A works
Attachments (1)
Change History (8)
Changed 7 years ago by
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
comment:3 Changed 7 years ago by
comment:4 Changed 7 years ago by
Added a regression test. Let me know if there's anything else that's needed!
comment:5 Changed 7 years ago by
I've verified the attached bug demonstration – the fix looks good, as pkgutil is more informed about package imports.
However, testing this is hard – I did some experiments with creating a test local site-packages-like folder with the .egg-link files with relative paths, but it didn't work on my first try.
Current tests not working as implemented
link to patch above is stale as it was to a single commit - pull request is here
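To spell out the idea for anyone skimming, the gist of the pkgutil approach is something like this (an illustrative sketch, not the literal patch):

```python
import pkgutil

def find_commands(package_paths):
    """Collect command module names across every portion of a namespace
    package, i.e. every entry on the package's __path__ list.

    A plain os.listdir() of a single directory only ever sees the first
    portion of the namespace package, which is exactly the bug described
    above; pkgutil.iter_modules walks all of them.
    """
    return sorted(name
                  for _finder, name, is_pkg in pkgutil.iter_modules(package_paths)
                  if not is_pkg)
```

With both project-A and project-B installed via egg-links, passing the package's full __path__ list here surfaces command_A and command_B.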
comment:6 Changed 7 years ago by
comment:7 Changed 7 years ago by
This is a duplicate of #14087.
For namespace packages, you can try my patch:
Fixed this in my Github fork: git@…:cberner/django.git commit: be5eb957c2dfeea2ce64888359791d3554ce6607
Hey all, I have another riveting problem from my genius professor T_T. I have to make a program using functions, reference parameters, and full string words. Now, if it were just a single letter at a time, this would be easy. But it wants us to let the user enter a whole word, and I have to have the program look at each letter and test it. It gives me these operations to use: string.str, string.length. Kinda clueless on this since she did her usual and didn't tell us how to do this stuff beforehand. Here is my code:
#include <iostream>
#include <string>
using namespace std;

void countVowels(string str, int& aCt, int& eCt, int& iCt, int& oCt, int& uCt, int& nV);

int main()
{
    string inputString;
    cout << "Please enter a word and we will tell you how many vowels are in it!" << endl;
    getline(cin, inputString);
    countVowels(inputString);
}

void countVowels (string s, int& aCont, int& eCont; int& iCont; int& oCont; int& uCont int& nonVowl)
{
    string.str(s)
    while (string.str(s) <= s)
    {
        if (string.at(s) == 'a')
            aCont++;
        else if (string.at(s) =='e')
            eCont++;
        else if(string.at(s) == 'i')
            iCont++;
        else if(string.at(s) == 'o')
            oCont++;
        else if (string.at(s) == 'u')
        else nonVowl++;
        string.str(s)++;
    }
}
Forgive me if it's 100 percent off, but as I've said I'm quite clueless on what I have to do.
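Just to show where I'm at: counting the vowels without all the separate reference counters is the part I can sort of manage, something like this simpler version – it's splitting it into the six counters passed by reference that loses me:

```cpp
#include <string>

// Simpler version: walk the word with .length() and .at() and keep one total.
int countVowelsTotal(const std::string& word)
{
    int total = 0;
    for (std::string::size_type i = 0; i < word.length(); i++)
    {
        char c = word.at(i);
        if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
            total++;
    }
    return total;
}
```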
Question
Trying to install R, but ldpaths and Renviron are missing
I am trying to install R, using apt-get but I keep getting this error:
/usr/bin/R: line 248: /usr/lib/R/etc/ldpaths: No such file or directory
cannot find system Renviron
Error: package or namespace load failed for ‘utils’: .onLoad failed in loadNamespace() for 'utils', details:
  call: options(op.utils[toset])
  error: invalid value for 'editor'
Error: package or namespace load failed for ‘stats’: .onLoad failed in loadNamespace() for 'utils', details:
  call: options(op.utils[toset])
  error: invalid value for 'editor'
During startup - Warning messages:
1: package ‘utils’ in options("defaultPackages") was not found
2: package ‘stats’ in options("defaultPackages") was not found
Intrepid: Kubuntu printer dialogue doesn't open
Bug #251111 reported by Dave Morley on 2008-07-23
Bug Description
Binary package hint: system-
After clicking on the icon for printing in Application/System the icon appears in the task bar but no window opens.
Steve Langasek (vorlon) on 2008-07-28
This import error does not look hard to fix, missing dependency? Or is it already fixed in the current version?
Till, since Jonathan isn't available this week, could you please have a look?
Jonathan says it's fixed now.
Fixed in 0.10.
Sorry, but it is NOT fixed. I still have the same error, and I have version 0.10 installed.
It's fixed in 0.11 – I've tried it, and it should be in the repos for tomorrow.
Can't find any 0.11 in the repo's - has it been published yet?
Confirmed on today's kubuntu i386.
Traceback (most recent call last):
  File "/usr/share/system-config-printer/system-config-printer-kde.py", line 74, in <module>
    import ppds
ImportError: No module named ppds
On Sun, 2002-12-08 at 20:51, Nick Phillips wrote:

> /me wonders whether some concept of namespaces in package names would
> be useful before we make it too easy for world + dog to run large
> repositories of .debs - Ximian was bad enough on its own, last I had to
> recover a system from someone using it... I dread to think how many
> versions of things like libgtksomeguicrapthatkeepsmakingabichanges
> (all mutually conflicting, and all required by something you *really
> need*) we'll end up with if people are easily able to maintain separate
> repositories.

I disagree that this is needed, not for any of the usual reasons, but for
the simple reason that this functionality already exists. The namespace of
an apt repository is its URL, and any information available in a "Release"
file at that URL.

Now, let's use your example of Ximian. The ftp URL of the Ximian debs you
were using was probably:

I imagine the problem you had was that the system had all the Ximian GNOME
debs installed, and you wanted to use those from Debian instead. That could
have been easily solved by putting the following in /etc/apt/preferences:

Package: *
Pin: release o=Debian
Pin-Priority: 1000

Package: *
Pin: origin
Pin-Priority: -1

In effect, "Debian packages can force a downgrade of anything, do not
consider Ximian packages for installation at all"

If we promote the use of third-parties using Release files, they would set
the "Origin:" tag to something useful to them, perhaps in Ximian's case
"Ximian". All the functionality you want is already there!

Scott

--
Scott James Remnant
Have you ever, ever felt like this? Had strange things happen? Are you
going round the twist?
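P.S. For completeness, the Release file at the root of such a repository might carry something like the following – the field values here are invented for illustration, and the only one the pin above keys on is Origin:

    Origin: Ximian
    Label: Ximian Desktop
    Suite: stable
    Architectures: i386
    Description: Example third-party repository metadata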
Summary
I found this in my drafts, dated Feb 6 2005. I'll just push it out now unedited. Original summary: I thought it was clear that we should add interfaces to Python, but Phillip Eby reminded me that years ago I rejected them in favor of Abstract Base Classes (ABCs). Why? I don't remember! Which do you prefer?
I can't for the life of me remember why I would prefer ABCs over interfaces! And even if I did remember, I believe I have changed my mind since then.
The only argument that comes to mind is that ABCs don't require more syntax. That's usually a strong argument in Python. But it seems that at least two of the largest 3rd party projects in Python (Zope and Twisted) have already decided that they can't live without interfaces, and have created their own implementation. From this I conclude that there's a real need.
But if they can do it themselves (albeit with some heavy-duty metaclass magic), doesn't that prove we don't have to add them to the language? Not at all. The mechanisms used are idiosyncratic, fragile, and cumbersome. For example, you have to use __xxx__ words to declare conformance to an interface. Plus, there's duplicate work, and interoperability suffers. IMO this is language infrastructure that should be provided by the language implementation.
Many followups to my blogs about adding optional type checking to Python (given the latest design I'm leaving off the word "static" :-) said that interfaces were more important than type checking. Personally, I think they're interconnected: interfaces make much more sense if you can also declare the argument types of the methods, and argument type declarations in Python are unthinkable without a way to spell duck types -- for which interfaces are an excellent approach. Phillip Eby's Monkey Typing proposal is an interface-free alternative, but I find it way too complex to be adopted as a standard Python mechanism.
A few more arguments against ABCs: they seem the antithesis of duck typing. Using ABCs for type declarations suggests that isinstance() is used for type checking, and even if reality is not quite that rigid, this suggestion would lead people in the wrong direction.
ABCs also allow, nay, encourage, "partially abstract" classes -- classes that have some abstract methods and some concrete ones. Of course, such a class as a whole is still abstract, but the resulting mixture of implementation and interface complexifies the semantics.
It has been suggested, especially by Ping, that there is a need to specify some semantics in interfaces. A typical example involves a file, where various operations such as readline() and readlines(), can be implemented by default in terms of a more primitive operation -- in this case read(). Unfortunately, I believe that this approach is not very practical, since those default implementations are usually inefficient, and the choice of the "most primitive" operation is often dependent on the situation. I also suspect that outside some common standard types there aren't all that many uses for this pattern. But if you have to do this, a mix-in class for the "default" functionality separate from the interface would work just as well as combining the two.
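To make that concrete, here is roughly what such a mix-in could look like (my illustrative sketch, not proposed stdlib code) – complete with the inefficiency just mentioned: readline() makes one read(1) call per character:

```python
# Illustrative mix-in: default readline()/readlines() built on top of the
# single primitive operation read().  Deliberately naive -- one read(1)
# call per character -- to show why such defaults are usually inefficient.
class ReadMixin:
    def readline(self):
        chars = []
        while True:
            c = self.read(1)
            chars.append(c)
            if c == "" or c == "\n":
                return "".join(chars)

    def readlines(self):
        lines = []
        while True:
            line = self.readline()
            if line == "":
                return lines
            lines.append(line)

# A minimal concrete "file": it supplies only the primitive read().
class StringFile(ReadMixin):
    def __init__(self, text):
        self.text = text
        self.pos = 0

    def read(self, n=-1):
        if n < 0:
            n = len(self.text) - self.pos
        chunk = self.text[self.pos:self.pos + n]
        self.pos += len(chunk)
        return chunk
```

A class providing only the primitive read() gets readline() and readlines() for free by inheriting the mix-in, while the interface itself stays purely declarative.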
Specifically because they are not classes, interfaces allow for clear, distinct semantics. (That is, the semantics of interface objects; I intend for interfaces to be neutral on the issue of the semantics of the objects they describe.) For example, (not that I necessarily propose this, but this is one way that we could decide to go), in a class claiming to implement a particular method declared in an interface, it could be flagged as an error if the actual implementation required more arguments than declared in the interface, or (assuming we can have type declarations in interfaces as well as in classes) if the argument types didn't match.
Python has a strong tradition that subclasses may redefine methods with a different signature, and making that an error goes against the grain of the language. But the explicit use of an interface changes things, and it seems appropriate that a class should not be allowed to violate an interface it claims to implement.
So, while I haven't decided that Python 3.0 will have interfaces (or type declarations), I'd like to go ahead and hypothesize about such a future, and look at some of the standard interfaces the language would provide for various common protocols like sequence, mapping and file.
Let's start with files, because they don't require genericity to fully specify the interface. Here's my first attempt. Note that I'm simplifying a few things; I'd like to drop the optional argument to readline() and readlines(), and I'm dropping the obsolete API xreadlines():
interface file:

    def readline() -> str:
        "Returns the next line; returns '' on EOF"

    def next() -> str:
        """Returns the next line; raises StopIteration on EOF.

        next() has one special property: due to internal buffering,
        mixing it with other operations is not guaranteed to work
        properly unless seek() is called first.
        """

    def __iter__() -> file:
        "Returns self"

    def read(n: int = -1) -> str:
        """Reads n bytes; if n < 0, reads until EOF.

        This blocks rather than returning a short read unless EOF is
        reached; however not all implementations honor this property.
        (Did you know the default argument was -1?)
        """

    def readlines() -> list[str]:
        "Returns a list containing the remaining lines"

    def seek(pos: int, how: int = 0) -> None:
        """Sets the position for the next read/write.

        The 'how' argument should be 0 for positioning relative to the
        start of the file, 1 for positioning relative to the current
        position, and 2 for positioning relative to the end of the file.
        """

    def tell() -> int:
        "Returns the current read/write position"

    def write(s: str) -> None:
        "Writes a string"

    def writelines(s: list[str]) -> None:
        """Writes a list of strings.

        Note: this does not add newlines; the strings in the list are
        supposed to end in a newline.
        """

    def truncate(size: int = ...) -> None:
        """Truncates the file to the given size.

        Default is the current position.
        """

    def flush() -> None:
        "Flushes buffered data to the operating system."

    def close():
        "Closes the file, rendering subsequent use invalid"

    def fileno() -> int:
        """Returns the underlying 'Unix file descriptor'.

        Not all file implementations may support this, and the semantics
        are not always the same.
        """

    def isatty() -> bool:
        "Returns whether this is an interactive device"

    # Attributes
    softspace: int  # read-write attribute used by 'print'

    # The following are read-only and not always supported
    mode: str                      # mode given to open
    name: str                      # file name given to open
    encoding: str|None             # file encoding
    closed: bool                   # whether the file is closed
    newlines: None|str|tuple(str)  # observed newline convention
This brings up a number of interesting issues already:
- The file interface does not include standard object methods and attributes such as __repr__() and __class__. But it does include __iter__(), since this is not supported by all objects.
- Moreover, __iter__() is defined differently than the "generic" definition of __iter__() would be: since we know that a file's __iter__ method returns the file itself, we know its type. This is in general the case for standard APIs that are explicitly part of a specific interface; we'll see this again for __getitem__ later.
- The argument to writelines() and the return value from readlines() are lists of strings. I really want to be able to express that in the interface definition. Even in Pascal, which has such a nice simple type system, you can say this! I'm using list[str], following the notation I used in an earlier blog where I was brainstorming about generic types.
- Rather than distinguishing between None and its type, for conciseness I'm using the singleton value as its own type. While type(None) is not None and never will be, in type expressions, None stands for type(None).
- In a few places, a value may be either a string or None; or either an int or None. I'm using the notation str|None respectively int|None for this, which also debuted in an earlier blog.
- The argument to truncate() has a dynamic default. I'm proposing the notation ... for this; I don't want to say -1 because passing int -1 doesn't have the same effect, unlike for read(). The semantics of this notation are that an implementation may choose its own default but that it must provide one.
- There's the thorny issue of some APIs that aren't always defined. I'm not going to introduce a notation for this yet; rather, I'll just say it in a comment. The default type checking algorithm will accept partial implementations of an interface.
- The file interface has a few attributes, one of which (softspace) is writable. This must be supported or else the print statement won't work right when directed to such a file. For now I'm using the notation:

      name: type

  and indicating the read-write-ness in a comment. The notation is less than ideal because it doesn't allow attaching a doc string. I could use this:

      name: type "docstring"

  but I fear that Python's parser isn't smart enough to always know where the type expression ends and the docstring begins (since 'type' can be an expression, syntactically).
- Note that softspace is conceptually a bool, but implemented as an int, and that's how it's declared here.
- The return type of close() is problematic. Usually it is None, but for file objects returned by os.popen() it is an int. I've chosen to leave out the '-> None' notation on the close() method, leaving its return type unspecified. I could also have written '-> int|None'. Or we could have a rule that allows a method that is declared to return None to return a different type, perhaps after subclassing.
- It would be lovely to be able to declare exceptions, even if we don't assign any semantics to this (Java checked exceptions have turned out to be a horrible thing in practice). But I'm leaving this to a future brainstorm.
- What about argument names and keyword parameters? In the above example, I don't intend to allow keyword parameters on any of the interfaces. But what if an interface wants to define keyword parameters? What if you want to require certain parameters to be given as keyword parameters (and you still want to declare their types)? Maybe we need a notation to explicitly say that an argument can or must be a keyword parameter? Or maybe it would be sufficient to allow leaving out the parameter name if it is supposed to be always positional? Then the declaration of read() would become:

      def read(:int = -1) -> str: "reads some bytes [...]"
Here's my attempt at defining a generic sequence interface. Note that I'm declaring this as a generic type, with 'T' being the type parameter. This despite my earlier promise not to bother with generic types. I think they are both useful and easy to implement, even if there are some thorny issues left: a dynamic check for list[int] is very expensive (it has to check every item in the list for int-ness) and any mutation of the list might change its type:
interface iterator[T]:

    def __iter__() -> iterator[T]:
        "returns self"

    def next() -> T:
        "returns the next item or raises StopIteration"


interface iterable[T]:
    """An iterable should preferably implement __iter__().

    __getitem__() is a fallback in case __iter__ is not defined.  Note
    that an iterator is a perfect candidate for an iterable, by virtue
    of its __iter__() method.
    """

    def __iter__() -> iterator[T]:
        "returns an iterator"

    def __getitem__(i: int) -> T:
        "returns an item"


interface sequence[T]:

    @overloaded
    def __getitem__(i: int) -> T:
        "gets an item"

    @overloaded
    def __setitem__(i: int, x: T) -> None:
        "sets an item"

    @overloaded
    def __delitem__(i: int) -> None:
        "deletes an item"

    def __iter__() -> iterator[T]:
        "returns iterator"

    def __reversed__() -> iterator[T]:
        "returns reverse iterator"

    def __len__() -> int:
        "returns number of items"

    def __contains__(x: T) -> bool:
        "returns whether x in self"

    def __getslice__(lo: int, hi: int) -> sequence[T]:
        "gets a slice"

    def __setslice__(lo: int, hi: int, xs: iterable[T]) -> None:
        "sets a slice"

    @overloaded
    def __getitem__(x: slice) -> sequence[T]:
        "gets an extended slice"

    @overloaded
    def __setitem__(x: slice, xs: iterable[T]) -> None:
        "sets an extended slice"

    @overloaded
    def __delitem__(x: slice) -> None:
        "deletes an extended slice"

    def __add__(x: iterable[T]) -> sequence[T]:
        "concatenation (+)"

    def __radd__(x: iterable[T]) -> sequence[T]:
        "right-handed concatenation (+)"

    def __iadd__(x: iterable[T]) -> sequence[T]:
        "in-place concatenation (+=)"

    def __mul__(n: int) -> sequence[T]:
        "repetition (*)"

    def __rmul__(n: int) -> sequence[T]:
        "repetition (*)"

    def __imul__(n: int) -> sequence[T]:
        "in-place repetition (*=)"

    # The rest are all list methods -- should we really define these?

    def append(x: T) -> None:
        "appends an item"

    def insert(i: int, x: T) -> None:
        "inserts an item"

    def extend(xs: iterable[T]) -> None:
        "appends several items"

    def pop(i: int = -1) -> T:
        "removes and returns an item"

    def remove(x: T) -> None:
        "removes an item by value; may raise ValueError"

    def index(x: T) -> int:
        "returns first index where item is found; may raise ValueError"

    def count(x: T) -> int:
        "returns number of occurrences"

    def reverse() -> None:
        "in-place reversal"

    # But not sort() -- that's really only a list method
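As an aside, the __getitem__ fallback in the iterable interface above mirrors what the iter() built-in already does today: an object that defines only __getitem__, indexed from 0, is iterable with no __iter__ at all:

```python
# The legacy iteration protocol in action: iter() falls back to __getitem__,
# calling it with 0, 1, 2, ... until IndexError terminates the loop.
class Squares:
    def __getitem__(self, i):
        if i >= 4:
            raise IndexError(i)
        return i * i
```

list(Squares()) produces [0, 1, 4, 9] even though Squares defines no __iter__ method.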
Some additional issues with this:
- The syntax for declaring a generic interface (interface X[T]) requires a bit of a leap of faith. But without parameterization we can say so much less about a sequence than what is common knowledge (and what a type inferencer should know) that I find it nearly useless to bother defining a sequence type without this notation. Possibly an implementation that ignores the type parameter T would be acceptable; use of T would be purely for the benefit of the human reader.
- I had to introduce two auxiliary interfaces:
- iterator, something with primarily a next() method
- iterable: something with primarily an __iter__() method, although something implementing __getitem__() will also work. That makes its declaration a bit awkward (with both methods being optional but at least one being required).
- I struggled a bit with the two possible signatures for __getitem__ and friends: it is normally called with an int argument, returning a single item, but the extended slice notation (e.g. seq[1:2:3]) calls it with an argument that is a slice object, and then it returns a sequence. Declaring the argument and return types as unions feels unsatisfactory because it throws away information. I decided to use the @overloaded decorator, which can be implemented using a small amount of namespace hacking.
- Should we have separate interfaces for immutable and mutable sequences? For now I'd rather only have one; the notion that an implementation may leave out methods naturally allows for immutable sequences.
- Should the sequence interface mention virtually everything that a list can do, or should it be minimal?
- Even if the sequence interface is inclusive (containing most list methods), I'd like to leave sort() out of it; sort() is really unique to the list type, and even if some user type defines a sort() method, it's unlikely to have the same signature as list.sort() (especially after what we did to this signature in Python 2.4). Feel free to prove me wrong.
- In current Python, the + operator on standard sequence types only accepts a right operand of the same type (list, tuple or str). But the += operator on a list accepts any iterable! I think + on any two sequences, or even a sequence and an iterable, in either order, should be allowed, and should return a new sequence. However, iterable + iterable should be left undefined; this is because iterator + iterator is not defined, and I think it should not be.
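For the curious, the @overloaded decorator really can be built with a small amount of namespace hacking; here is one hypothetical sketch in present-day Python, dispatching on parameter annotations (not necessarily how it would be done for real):

```python
import sys

# Hypothetical sketch only.  Each @overloaded decoration merges with any
# previous overload of the same name in the enclosing class body (the
# "namespace hacking"), and calls are dispatched on annotated types.
class overloaded:
    def __init__(self, func):
        self.alternatives = [func]
        previous = sys._getframe(1).f_locals.get(func.__name__)
        if isinstance(previous, overloaded):
            self.alternatives = previous.alternatives + [func]

    def __get__(self, obj, objtype=None):
        def dispatch(*args):
            for func in self.alternatives:
                code = func.__code__
                names = code.co_varnames[1:code.co_argcount]  # skip 'self'
                types = func.__annotations__
                if len(args) == len(names) and all(
                        isinstance(arg, types.get(name, object))
                        for arg, name in zip(args, names)):
                    return func(obj, *args)
            raise TypeError("no matching overload")
        return dispatch

class Sequence:
    def __init__(self, items):
        self.items = list(items)

    @overloaded
    def __getitem__(self, i: int):
        "gets an item"
        return self.items[i]

    @overloaded
    def __getitem__(self, s: slice):
        "gets an extended slice"
        return Sequence(self.items[s])
```

With this sketch, seq[1] selects the int signature and seq[0:2] the slice signature, mirroring the two __getitem__ declarations in the sequence interface above.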
XXX
If you'd like to be notified whenever Guido van Rossum adds a new entry to his weblog, subscribe to his RSS feed.
Now that the Visual Studio 2015 Preview is available and the C# 6 feature set is a bit more stable, I figured it was time to start updating the Noda Time 2.0 source code to C# 6. The target framework is still .NET 3.5 (although that might change; I gather very few developers are actually going to be hampered by a change to target 4.0 if that would make things easier) but we can still take advantage of all the goodies C# 6 has in store.
I’ve checked all the changes into a dedicated branch which will only contain changes relevant to C# 6 (although a couple of tiny other changes have snuck in). When I’ve got round to updating my continuous integration server, I’ll merge onto the default branch, but I’m in no rush. (I’ll need to work out what to do about Mono at that point, too – there are various options there.)
In this post, I’ll go through the various C# 6 features, and show how useful (or otherwise) they are in Noda Time.
Read-only automatically implemented properties (“autoprops”)
Finally! I’ve been waiting for these for ages. You can specify a property with just a blank getter, and then assign it a value from either the declaration statement, or within a constructor/static constructor.
So for example, in DateTimeZone, this:
private static readonly DateTimeZone UtcZone = new FixedDateTimeZone(Offset.Zero);

public static DateTimeZone Utc { get { return UtcZone; } }
becomes
public static DateTimeZone Utc { get; } = new FixedDateTimeZone(Offset.Zero);
and
private readonly string id;

public string Id { get { return id; } }

protected DateTimeZone(string id, ...)
{
    this.id = id;
    ...
}
becomes
public string Id { get; }

protected DateTimeZone(string id, ...)
{
    this.Id = id;
    ...
}
As I mentioned before, I’ve been asking for this feature for a very long time – so I’m slightly surprised to find myself not entirely positive about the results. The problem it introduces isn’t really new – it’s just one that I’m not used to, as I haven’t used automatically-implemented properties much in a large code base. The issue is consistency.
With separate fields and properties, if you knew you didn’t need any special behaviour due to the properties when you accessed the value within the same type, you could always use the fields. With automatically-implemented properties, the incidental fact that a field is also exposed as a property changes the code – because now the whole class refers to it as a property instead of as a field.
I’m sure I’ll get used to this – it’s just a matter of time.
Initial values for automatically-implemented properties
The ability to specify an initial value for automatically-implemented properties applies to writable properties as well. I haven’t used that in the main body of Noda Time (almost all Noda Time types are immutable), but here’s an example from PatternTestData, which is used to provide data for text parsing/formatting tests. The code before:
    internal CultureInfo Culture { get; set; }

    public PatternTestData(...)
    {
        ...
        Culture = CultureInfo.InvariantCulture;
    }
And after:
    internal CultureInfo Culture { get; set; } = CultureInfo.InvariantCulture;

    public PatternTestData(...)
    {
        ...
    }
It’s worth noting that just like with a field initializer, you can’t call instance members within the same class when initializing an automatically-implemented property.
Expression-bodied members
I’d expected to like this feature… but I hadn’t expected to fall in love with it quite as much as I’ve done. It’s really simple to describe – if you’ve got a read-only property, or a method or operator, and the body is just a single return statement (or any other simple statement for a void method), you can use => to express that instead of putting the body in braces. It’s worth noting that this is not a lambda expression, nor is the compiler converting anything to delegates – it’s just a different way of expressing the same thing. Three examples of this from LocalDateTime (one property, one operator, one method – they’re not next to each other in the original source code, but it makes it simpler for this post):

    public int Year { get { return date.Year; } }

    public static LocalDateTime operator +(LocalDateTime localDateTime, Period period)
    {
        return localDateTime.Plus(period);
    }

    public static LocalDateTime Add(LocalDateTime localDateTime, Period period)
    {
        return localDateTime.Plus(period);
    }
becomes
    public int Year => date.Year;

    public static LocalDateTime operator +(LocalDateTime localDateTime, Period period) =>
        localDateTime.Plus(period);

    public static LocalDateTime Add(LocalDateTime localDateTime, Period period) =>
        localDateTime.Plus(period);
In my actual code, the operator and method each only take up a single (pretty long) line. For some other methods – particularly ones where the body has a comment – I’ve split it into multiple lines. How you format your code is up to you, of course :)
So what’s the benefit of this? Why do I like it? It makes the code feel more functional. It makes it really clear which methods are just shorthand for some other expression, and which really do involve a series of steps. It’s far too early to claim that this improves the quality of the code or the API, but it definitely feels nice. One interesting data point – using this has removed about half of the return statements across the whole of the NodaTime production assembly. Yup, we’ve got a lot of properties which just delegate to something else – particularly in the core types like LocalDate and LocalTime.
The nameof operator
This was a no-brainer in Noda Time. We have a lot of code like this:
    public void Foo([NotNull] string x)
    {
        Preconditions.CheckNotNull(x, "x");
        ...
    }
This trivially becomes:
    public void Foo([NotNull] string x)
    {
        Preconditions.CheckNotNull(x, nameof(x));
        ...
    }
Checking that every call to Preconditions.CheckNotNull (and CheckArgument etc) uses nameof, and that the name is indeed the name of a parameter, is going to be one of the first code diagnostics I write in Roslyn, when I finally get round to it. (That will hopefully be very soon – I’m talking about it at CodeMash in a month!)
Dictionary initializers
We’ve been able to use collection initializers with dictionaries since C# 3, using the Add method. C# 6 adds the ability to use the indexer too, which leads to code which looks more like normal dictionary access. As an example, I’ve changed a “field enum value to delegate” dictionary in TzdbStreamData to use the indexer form.
One downside of this is that the initializer will now not throw an exception if the same key is specified twice. So whereas the bug in this code is obvious immediately:
    var dictionary = new Dictionary<string, string>
    {
        { "a", "b" },
        { "a", "c" }
    };
if you convert it to similar code using the indexer:
    var dictionary = new Dictionary<string, string>
    {
        ["a"] = "b",
        ["a"] = "c",
    };
… you end up with a dictionary which only has a single value.
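The same silent “last one wins” pitfall exists in other languages, too. For instance, a JavaScript object literal (used here purely as an analogy, not as anything from the original post) accepts duplicate keys without complaint and keeps only the final value:

```javascript
// Duplicate keys in an object literal are legal in ES2015+;
// the second assignment silently overwrites the first.
const d = { a: "b", a: "c" };

const keyCount = Object.keys(d).length; // only one key survives
const value = d.a;                      // the last value wins
```

So the trade-off is the same in both languages: the indexer-style syntax reads more naturally, but it converts a loud, obvious duplicate-key bug into a quiet one.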
To be honest, I’m now pretty used to the syntax which uses Add – so even though there are some other dictionaries initialized with collection initializers in Noda Time, I don’t think I’ll be changing them.
Using static members
For a while I didn’t think I was going to use this much – and then I remembered NodaConstants. The majority of the constants here are things like MillisecondsPerHour, and they’re used a lot in some of the core types like Duration. The ability to add a using directive for a whole type, which imports all the members of that type, allows code like this:
public int Seconds => unchecked((int) ((NanosecondOfDay / NodaConstants.NanosecondsPerSecond) % NodaConstants.SecondsPerMinute));
to become:
    using NodaTime.NodaConstants;
    ...
    public int Seconds =>
        unchecked((int) ((NanosecondOfDay / NanosecondsPerSecond) % SecondsPerMinute));
Expect to see this used a lot in trigonometry code, making all those calls to Math.Cos, Math.Sin etc a lot more readable.
Another benefit of this syntax is to allow extension methods to be imported just from an individual type instead of from a whole namespace. In Noda Time 2.0, I’m introducing a NodaTime.Extensions namespace with extensions to various BCL types (to allow more fluent conversions such as DateTimeOffset.ToOffsetDateTime()) – I suspect that not everyone will want all of these extensions to be available all the time, so the ability to import selectively is very welcome.
String interpolation
String interpolation always uses the system default culture (at least in this preview), which we rarely want in Noda Time – so the feature isn’t terribly useful for us. But there are a few cases where it’s handy.
For example, consider this code:
    throw new KeyNotFoundException(
        string.Format("No calendar system for ID {0} exists", id));
With C# 6 in the VS2015 preview, this has become
throw new KeyNotFoundException("No calendar system for ID \{id} exists");
Note that the syntax of this feature is not finalized yet – I expect to have to change this for the final release to:
throw new KeyNotFoundException($"No calendar system for ID {id} exists");
It’s always worth considering places where a feature could be used, but probably shouldn’t be. ZoneInterval is one such place. Its ToString() method looks like this:
    public override string ToString() =>
        string.Format("{0}: [{1}, {2}) {3} ({4})",
            Name,
            HasStart ? Start.ToString() : "StartOfTime",
            HasEnd ? End.ToString() : "EndOfTime",
            WallOffset,
            Savings);
I tried using string interpolation here, but it ended up being pretty horrible:
- String literals within string literals look very odd
- An interpolated string literal has to be on a single line, which ended up being very long
- The fact that two of the arguments use the conditional operator makes them harder to read as part of interpolation
Basically, I can see string interpolation being great for “simple” interpolation with no significant logic, but less helpful for code like this.
Null propagation
Amazingly, I’ve only found a single place to use null propagation in Noda Time so far. As a lot of the types are value types, we don’t do a lot of null checking – and when we do, it’s typically to throw an exception if the value is null. However, this is the one place I’ve found so far, in BclDateTimeZone.ForSystemDefault. The original code is:
if (currentSystemDefault == null || currentSystemDefault.OriginalZone != local)
With null propagation we can handle “we don’t have a cached system default” and “the cached system default is for the wrong time zone” with a single expression:
if (currentSystemDefault?.OriginalZone != local)
(Note that local will never be null, or this transformation wouldn’t be valid.)
There may well be a few other places this could be used, and I’m certain it’ll be really useful in a lot of other code – it’s just that the Noda Time codebase doesn’t contain much of the kind of code this feature is targeted at.
Conclusion
What a lot of features! C# 6 is definitely a “lots of little features” release rather than the “single big feature” releases we’ve seen with C# 4 (dynamic) and C# 5 (async). Even C# 3 had a lot of little features which all served the common purpose of LINQ. If you had to put a single theme around C# 6, it would probably be making existing code more concise – it’s the sort of feature set that really lends itself to this “refactor the whole codebase to tidy it up” approach.
I’ve been very pleasantly surprised by how much I like expression-bodied members, and read-only automatically implemented properties are a win too (even though I need to get used to it a bit). Other features such as static imports are definitely welcome to remove some of the drudgery of constants and provide finer-grained extension method discovery.
Altogether, I’m really pleased with how C# 6 has come together – I’m looking forward to merging the C# 6 branch into the main Noda Time code base as soon as I’ve got my continuous integration server ready for it…
    list = "1,2,3,4,5,6";
    while (x = getNextInList(list)) {
        print(x);
    }

This can be implemented by simply chopping off one item per iteration in the getNextInList function (pop). I cannot believe you actually show that code. It's, IMHO, ugly... The list cannot contain objects of other types (string, file, socket). No one will ever use that as a replacement for a list. Did you think even a bit before showing this example? Or did you just want to put up something?
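The critic's complaint is that popping items off a comma-delimited string ties iteration to one representation (strings of text). With first-class functions, the "next item" idea can be abstracted over any element type. A minimal JavaScript sketch (the names here are invented for illustration):

```javascript
// A generic "next item" producer: works for any element type
// (numbers, strings, objects, ...), unlike a comma-delimited string.
function makeListIterator(items) {
  let i = 0;
  // Returns the next item, or undefined when exhausted.
  return function () {
    return i < items.length ? items[i++] : undefined;
  };
}

const next = makeListIterator([1, "two", { three: 3 }]);
const seen = [];
let x;
while ((x = next()) !== undefined) {
  seen.push(x); // same loop shape as the string version, no string parsing
}
```

The loop shape is the same as in the string-chopping version, but the data no longer has to be text.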
    re = createRegexp();
    g1 = createMatchManyNotRange('0','9');
    re.addGroup(g1);
    g2 = createMatchManyOfRange('0','9');
    re.addGroup(g2);
    re.match("hello 1234");
    if (re.matchGroup(g1)) {
        // do 1...
    }
    if (re.matchGroup(g2)) {
        // do 2...
    }

and

    regexp-do [
        /([^0-9]*)/ { do 1... },
        /([0-9])*/  { do 2... }
    ]

I think the point about allowing one to build a language for their library is moot, because when you are using a library, you are really using its invented language: the API IS the invented language. The only difference is that in a language that does not allow FP, the invented language looks ugly; you see kludges showing that the new language doesn't get treated as a first-class construct. It's as if, to produce a picture, one system only lets you write a description of the picture, while another just lets you draw it. What's the difference? I am not sure what your example is trying to illustrate. There are many ways to clean up regex syntax/APIs without using FP. We are wandering away from challenge #6 here anyhow. If you wish to start a topic such as FpMakesRegexBetter, be my guest. I'm not saying there is no way to clean up a regexp API without using FP. I'm saying that there's NO WAY to clean up a regexp API without reinventing a custom language (the regexp part), just as there was no way to clean up database queries without having invented the SQL language. It shows that there is no benefit to disallowing people from inventing languages for specific domains, because an API call is also an invented language; the difference is just that when it's invented in a language that discourages sublanguages, it comes out uglier. Move Some to ApiIsLanguage. Languages such as Tcl are pretty good at allowing/making sublanguages without explicit FP, I would note. Yes, by using eval. However, we already talked about how using HOF is more suitable than Eval in DynamicStringsVsFunctional. I have not seen decent real-world/realistic examples there.
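The "regexp-do" sketch above is straightforward to approximate in any language with first-class functions. Here is a hedged JavaScript version; the pattern/action pairs are illustrative stand-ins, not anything from the original thread:

```javascript
// Dispatch table: each entry pairs a regex with the action to run on a match.
const rules = [
  { pattern: /^[^0-9]+/, action: (m) => "letters:" + m[0] },
  { pattern: /[0-9]+/,   action: (m) => "digits:" + m[0] },
];

// Run every rule whose pattern matches the text, collecting action results.
function regexpDo(text, ruleList) {
  const results = [];
  for (const { pattern, action } of ruleList) {
    const m = text.match(pattern);
    if (m) results.push(action(m));
  }
  return results;
}

const out = regexpDo("hello 1234", rules);
```

The table of pattern/action pairs is plain data, so new cases can be added without touching the dispatch loop — which is the first-class-construct point being argued above.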
    run().

This code starts up the database server running 24*7, with full logging and administration, with the company name taken from a machine environment variable. Other flexibility is out of my language's scope. Now this is my challenge: you TOP minds, please write code shorter than this in your language. Otherwise your claim of TOP is null and void*. And don't challenge my language and paradigm with another algorithm, 'cause that is out of my language's scope!!! </rant> Is there a point to this? Hints of DomainPissingMatch. Sure there is. I'm tired of seeing top say, "No, that part of the problem is not in my application domain." He tried to say HOF has no benefit over his TOP approach, but when someone showed how HOF achieves simplicity in some problem TOP cannot handle, he would just say it was not in his domain. He is not trying to learn anything. There may be smug Lisp weenies, but even those guys know about whatever programming practice you are talking about; top is just not so. If I, as a CRUD-SCREEN-REPORT-001 programmer, took that attitude, I would just say top's things are useless because I do all I want in just one line of code and I don't wanna learn anything new.
    define output_entries( blog_entries, output_type=Outputs::HTML )
      for each entry in blog_entries do
        if( output_type == Outputs::RSS )
          output << RSSEntry( entry )
        elsif( output_type == Outputs::HTML )
          output << HTMLEntry( entry )
        else
          output << TextEntry( entry )
        end
      end
    end

And if someone wanted to add new types, they'd have to extend this further. This is exactly the approach that TopMind said he espoused, and you can see it on ArrayDeletionExample. This is basically a glorified case statement. It's annoying, hard to extend, and prone to breakage. Let's rewrite it with a HigherOrderFunction in mind.
    define output_entries( blog_entries, output_func )
      for each entry in blog_entries do
        output << output_func( entry )
      end
    end

Now, if we were to add a few HOFs, like the following, we won't see a massive lines-of-code savings right away:
    output_as_html = { |entry| return HTMLEntry( entry ) }  # HOF that returns text of HTMLEntry on its argument
    output_as_rss  = { |entry| return RSSEntry( entry ) }   # HOF that returns text of RSSEntry on its argument
    output_as_text = { |entry| return TextEntry( entry ) }  # HOF that returns text of TextEntry on its argument

Or we might see it used inline, as is popular in Ruby and Lisp:
    output_entries( blog_entries, { |entry| return HTMLEntry( entry ) } )

We can invoke output_entries with whatever we want. We've saved a little code, but not too much. Within 5%, as TopMind says. But what happens when something changes? For example, it turns out our users are reporting that every so often, bogus entries are coming up in their blogs. Maybe our master database queries are broken somehow? But we can't easily reproduce this problem on our test server. So what we need to do is put a log on the live system and hope we catch the bug in action. Well, with the original case approach, this would suck. We'd have to go through each entry in the case statement and add the same code over and over again. Cut'n'paste code at its worst, and clearly it's a pain to handle. But if we had the higher-order-function version, it's simple and painless and reliable!
    # Assume that the script parsed the URL above, and set "desired_output" as one of
    # our output_as_* functors.
    output_entries( blog_entries, { |entry| debug << DebugInfo( entry ); return desired_output( entry ) } )

See? That's much less code. Using HigherOrderFunctions makes your code more flexible. Imagine if only some of the blog sites were breaking. Since you might have thousands of users, you might want to be able to dynamically enable this logging to save on disk space and I/O bandwidth until they report the problem. Again, this would be easy with higher order functions, but much harder with the case-like statement. You'd have to add if-statements into every if statement, massively increasing the size of the code. Flexible code leads to long term savings. Even though it wasn't immediately obvious, the design paid off when we had to modify our code. This kind of savings often comes when you least expect it. It's why when I wrote my Weather app (I'm the guy who talked about it), I used them. This is not to say they're a GoldenHammer. It's undeniable that they don't save much code in the short term, and that they are often more complex (especially in languages that don't support them well, like C++). But even if the code savings is minimal, the ease with which someone familiar with the pattern can extend the program shouldn't be taken lightly. You might argue that "Eval" could mimic this behavior, and you'd be partially right. Certainly, eval does something very similar to what HigherOrderFunctions can do. But you're using Eval at the wrong time. Eval is a very slow and powerful tool. Really, it's meant only for use when you have no other option. HigherOrderFunctions have strengths ideal for this, and they can encapsulate eval calls at a later date, if you feel you need them.
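The pseudocode above translates directly into runnable JavaScript. In this sketch, the formatter names (htmlEntry, rssEntry) are hypothetical stand-ins for HTMLEntry/RSSEntry, and the debug-wrapping functor shows the logging scenario discussed:

```javascript
// Hypothetical per-format renderers (stand-ins for HTMLEntry/RSSEntry).
const htmlEntry = (e) => "<p>" + e + "</p>";
const rssEntry  = (e) => "<item>" + e + "</item>";

// The higher-order version: the output policy is a parameter, not a branch.
function outputEntries(entries, outputFunc) {
  return entries.map(outputFunc);
}

// Plain use:
const html = outputEntries(["a", "b"], htmlEntry);

// The debugging scenario: wrap any formatter with logging without touching
// the loop or adding code to each per-format branch.
const debugLog = [];
const withDebug = (fmt) => (e) => { debugLog.push(e); return fmt(e); };
const rss = outputEntries(["a", "b"], withDebug(rssEntry));
```

Note that `withDebug` composes with every formatter at once, which is the "add logging in one place" claim made in the text.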
Claiming that you'd forego HOFs here for Eval is like saying you can't be bothered to take the parking brake off your car: it's so much more convenient to just drive with it on, and who cares if it ruins your gas mileage and brakes, right? Cars are getting better all the time. I hope you can see where we've been going with this discussion now, Top.

Reply

First of all, why are you assuming that each output "type" is mutually exclusive? (Be very careful using the word "type" around me.) Is it not possible that multiple output "types" may be needed per request? You talk about making stuff change-friendly, and to be change-friendly we cannot (overly) hard-wire the assumption of mutual exclusiveness into the design. A CollectionOrientedProgramming viewpoint would be to have the output formats (code snippets?) in one table/collection and the input in another, and then join them as needed. Going from a one-to-many relationship to a many-to-many relationship does not require a change to the original output format table, only the introduction of an intermediate many-to-many table. But, let's stick to a case-statement comparison right now.

Second, you claimed, "We'd have to go through each entry in the case statement and add the same code over and over again." I don't see why this would be the case. Let's assume a statement like this:
    function outputMessage(msgRecord, outputFormats) {
        result = "";
        initialization_stuff...;
        if (listContains(outputFormats, 'html')) {
            result .= do_html_stuff();
        }
        if (listContains(outputFormats, 'rss')) {
            result .= do_rss_stuff();
        }
        if (listContains(outputFormats, 'text')) {
            result .= do_text_stuff();
        }
        post_processing_stuff...;
        return(result);
    }

We can put the tracing at the beginning (near "initialization_stuff") or at the end (near "post_processing_stuff"). I see no reason to put it under each of the output format blocks unless it is somehow specific to that format, in which case you have the same issue to contend with. Either it is sharable code or specific: specific goes inside the decision blocks, general goes outside. I also assumed orthogonality here (an easy change from a case list) by using a potentially multi-value list and simply appending the results to a string; note the blocks are independent "if"s rather than "else if"s so that multiple formats can be produced in one call. I don't know what would be done in practice because I don't have the full requirements. If they need to be separated into different files or records, that does not change the general strategy here.
    general_pre      // common to all
    block A          // stuff specific to A
        pre_A
        process_A
        post_A
    block B          // stuff specific to B
        pre_B
        process_B
        post_B
    block C          // stuff specific to C
        etc...
    end blocks
    general_post

If we follow SeparateIoFromCalculation, then we can intercept the results (or pre-sults) outside of the format-specific blocks. As far as your weather example is concerned, like I already said, your claimed limitations are very suspicious. (See "perfect storm" comment under ArrayDeletionExample.) Further, perhaps this discussion goes under ArrayDeletionExample instead of here. However, that topic is getting too long. -- top

SwitchStatementsSmell contains somewhat-related discussions about case statements.
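Top's general_pre/general_post counter-argument can be made concrete. A minimal JavaScript sketch (the block contents and format names are invented for illustration) showing a shared trace point that sits outside the format-specific branches:

```javascript
const trace = [];

function outputMessage(msg, formats) {
  let result = "";
  trace.push("begin:" + msg);                        // general_pre: one shared instrumentation point
  if (formats.includes("html")) result += "<p>" + msg + "</p>";
  if (formats.includes("text")) result += msg;       // independent "if"s: formats are not mutually exclusive
  trace.push("end");                                 // general_post: one shared instrumentation point
  return result;
}

const r = outputMessage("hi", ["html", "text"]);
```

This captures the claim being made: instrumentation that is common to all formats needs to be added in only one pre-spot and one post-spot, without higher-order functions — though, as the other side notes, per-format instrumentation still requires touching each branch.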
    print "Hello " + name + "."

Replace "Hello " and "." with three-page-long, fairly non-redundant strings, and you will have a six-page program that just can't be shortened (ignoring exotic things like data compression). Maybe that is because SQL and, to a lesser extent, HTML are doing most of the "work". I don't see that as a bad thing. It is a problem-solving approach: use high-level tools to do the work for you. If it is not something that FP can simplify, then we have learned something, and the "significantly less code" claimers should put caveats into their claims. I am looking for techniques to simplify my domain, not university homework assignments that find the shortest path to all combinations of pi, etc. I chose that example because it is a bit closer to my domain than most of those offered. (But still not a perfect example; a better example would probably have more seemingly arbitrary business rules tossed in.) If you have a biz-oriented example you wish to present, that would be great also.
    for(i = 0; i < 100; i++)
        printf("Hello, world!\n");
    for(i = 0; i < 100; i++)
        printf("Wheeee!\n");

The program just doesn't do enough for subroutines to be of any use in shortening it. Subroutines, recursion, data structures, garbage collection... they just won't help. However, they are all capable of vastly reducing code size in more complicated programs. Does that make sense? No. I would like to see an example from a "complex" business application. The trick is to keep things from getting too complex by dividing them into individual tasks. The large-scale communication is via the database, not so much algorithms. More about this in ProceduralMethodologies. And, I do heavily use subroutines in #6. Anyhow, as far as simplifying the above:
    repeatPrint("Hello world", 100);
    repeatPrint("Wheeee!", 99);

But let me get this straight: you are agreeing that FP won't help much with challenge #6 code-size-wise, correct? Any other FP fans disagree? -- top
    Select ... where criteria='';
    Delete from users;
    ....
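The danger being alluded to here is classic SQL injection: naive string concatenation lets user input smuggle extra statements into the query text. A minimal illustration, building strings only and executing nothing (the "safe" shape at the end is a sketch of the parameterized-query idea, not a real driver API):

```javascript
// Naive concatenation: attacker-controlled input becomes part of the SQL text.
function naiveQuery(userInput) {
  return "SELECT * FROM users WHERE name = '" + userInput + "'";
}

const evil = naiveQuery("x'; DELETE FROM users; --");
// The generated text now contains a second, destructive statement.

// The standard defense is a parameterized query: the value travels
// separately from the SQL text, so it can never be parsed as SQL.
const safe = {
  text: "SELECT * FROM users WHERE name = ?",
  params: ["x'; DELETE FROM users; --"],
};
```

Any query-building scheme like the data-dictionary one discussed below needs to either escape values or (preferably) hand them to the driver as parameters.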
    and X = 999

This assumes the database column is 'X' and the user entered '999'. It basically just uses the WhereAndAnd technique. The "fmtType" column determines whether it has quotes around it or not, and for strings the default is a "LIKE" operator. But for ranges we don't want an equality comparison; thus, the "Comparer" column has ">=" for the first range part and "<=" for the second. If the user types in "5" and "20" for the range, both "listPrice" rows will generate two strings:
    1. " and listPrice >= 5"
    2. " and listPrice <= 20"

These are then internally concatenated together as part of the final SQL statement. (A fancier version could perhaps use an SQL "BETWEEN" clause instead.) Here is the snippet of code that performs most of this process:
    '---Comparer
    if isblank(rs("comparer")) then
        useComparer = " = "
        if fmtType = "T" then
            useComparer = " LIKE "
        end if
    else
        useComparer = space(1) & rs("comparer") & space(1)
    end if

It is basically just constructing "AND" sub-clauses for the final SQL, as described in QueryByExample and WhereAndAnd. The two other data dictionary columns that play a significant role in the range approach are the "keepWithPrior" column and the "sequence" column. If "keepWithPrior" is set to "true", then the input field stays on the same line. This is just a formatting nicety. What is "prior" is defined by the values in the "sequence" column. The "in stock" option is format type "Y", which produces a pull-down list of "Yes", "No", and "(either)" as the choices. One can also use the "theList" column to enter a comma-delimited list of options. This particular example does not use that feature. A fancier version could perhaps separate the list description and the list value. (Note that I called the column "theList" instead of "List" to avoid overlapping with reserved words. I don't know if it would be an issue in this language or DB, but I generally avoid column names that risk such overlaps.) -- top

This is still completely unhelpful. You're explaining "how" and not "what". A good spec states the business requirements, and only the business requirements. It should have no code (unless one of the business requirements is that managers can write code for it), and it should lay out everything that a user can do with the system. -- JonathanTang

{That is not true. Only a little bit talks about code. I am mostly describing the UI and how it relates to the control tables (which I suppose are a kind of UI for app admin people). You don't have to use control tables if you don't want. You can use EssExpressions if you prefer. I won't complain if it is anything declarative (non-executable) and fairly easy to edit.
Personally I would rather use a table editor for entering such info, but if you dig EssExpressions for whatever reason, go for it. If you want it in requirements format, then try this: "Must provide a way for an experienced operator or administrator to add and edit report configuration information without having to know programming." -- top}

Top, it is true. The document is awfully light on requirements and awfully heavy on how something should be done. We're well aware we don't have to use control tables. What we're confused about is the exact specs of this boring system. Tell us what we need to store, what we need to be able to query upon, the limits of those queries, the specifics of the preview function, and all the details in between. You keep telling us the how as if it matters or gives us insight. Like I said before, I had to keep referring to the pictures just to figure out what data would be operated on. That's not acceptable as a requirement, professionally or personally. The "how" is especially useless when you're asking people to do the "what" in a better way. Could this be a case of GoldenHammerTintedGlasses? :)

Ask questions. I added a section below for that with an example. And please stop mentioning that you find it a boring problem. Unless you can tie boredom to being less practical as an example, there is no reason to keep bringing it up. The people who sign our paychecks are not necessarily in business to entertain us. If it bores you that much, just admit FP defeat and move on. If you can find an "exciting" somewhat practical and typical biz example, you are welcome to present it as an alternative. -- top

The problem is that you don't sign any paycheck, and as long as you offer it as a challenge, you have to assume the responsibility to make it interesting. Otherwise you have no reason to complain that people criticize it as boring and a waste of time.
    Boring != Waste_of_Time

Or to be more precise:
    Equiv(Boring, Waste_of_Time) != True

You have failed to find a better example for biz apps, so you are stuck with mine. One thing I have learned from the biz world is that you'd better have a ready alternative before criticizing the existing state. Let me break down the math for you, Top:
    (Boring * No Money) ^ Trying to educate someone who rarely listens
        + Student likes to twist examples so he can ignore them
        = Waste of Time

We're doing this for you, not because we have some deep-seated desire to implement mind-numbingly boring CrudScreen apps. This kind of work is why we have software interns. Me, I'd like to write something that, you know... does something, besides tell a database what to keep. Do it because you want the geek bragging rights of using FP to kick a skeptic's ass in the biz domain. Sounds like sour grapes from somebody who is stumped to deliver their claim. As far as "rarely listens", I don't want words. They only lead to LaynesLaw battles. I want to see the claims in code, not moving lips. By the way, PhDs in India are only $2 an hour on the world market. Who needs interns? -- top

Ahem, I'm a software intern, and the work I did this year is much more interesting than this. This reminds me of the stuff I did freshman year (I didn't go back to that company, BTW), or the stuff I still do for free for friends. Except, I'm not a freshman, and we're not that friendly. ;-) BTW, PhDs - whether in India or the U.S. - are a lot more than $2 an hour. There're plenty of good programmers in India. But the good ones get paid as much as the good ones in the U.S. -- JonathanTang

I already proved my claim. See the above equation to see my feelings. We showed you examples; you turned them down. I'm not an FP golden hammer addict. I have lots of tools in my toolbox, including most of yours. And no, Top, there are no PhDs from respected universities making $2/hr for software in India (barring freak examples). This kind of business app has no challenges. It's just like stamping out license plates. It's good for people who can't handle more than mindless repetition. I might do this kind of work to get a product out the door (because I'm being paid, and someone has to do it) or because I'm the FNG of the team, but not because I'd want to.
It's like taking pride in making cookies in an EZ-bake oven. Sure it's great, I guess, if you're someone who finds that kind of thing challenging.

{You are confusing personal entertainment with use. I too would rather be doing AI research, but there is no money in that except for the lucky few. If CrudScreen stuff were as repetitious as you say it is, then it would be easy to automate and you could put hundreds of thousands of programmers out of business. As explained in DomainPissingMatch, biz apps have plenty of complexity.}

This application has no use. While some people may make an honest living doing such work (using tools which help automate it and expedite development, of course), they receive a paycheck for the work. We will not.
    filter-for-interested-column(
        #'select-clause,
        remove-if-not(#'where-clause,
            join-table("table1", "table2", ...)))

You would not have this SQL power if SQL didn't have HOF in its language. If SQL were like C/C++, you would be doing this:
    // hypothetical non-HOF SQL syntax
    join_result = join("table1", "table2", ...)
    filtered_result = []
    for (record in join_result) {
        // can't have a where-clause, because I cannot pass a function into another function
        if (is_valid_by_where_clause(record)) {
            filtered_result << record;
        }
    }
    result = []
    for (record in filtered_result) {
        // can't have a select-clause, because I cannot pass a function into another function
        result << select_columns(record);
    }

You enjoy so much of this expressive power in the data retrieval domain because of HOF. Yet you said HOF is not much use in your domain — YOU ARE USING IT YOURSELF ALL THE TIME AND YOU CAN'T LIVE WITHOUT IT. Now, in challenge #6 you raise, NO FP language is going to reduce the query code to be smaller than SQL, because the FP-with-HOF and SQL solutions WILL BE THE SAME — they come down to much the same size. But NO language without HOF is going to be the same size as FP or SQL. That's why we said HOF is so useful; after all, you don't do just select/insert/update/delete. We do other operations too!! See, after you have queried all the data out into a series of rows, with your procedural language you have to come back to the for loop again. But no, FPers don't have to do that. They can still enjoy the same expressiveness as in SQL with whatever operation they are going to use!! They can pass criteria and filtering to everything, not only the SQL engine!!! That's where the code of an FP language will start to shrink relative to your procedural parts. But there are so few things to do in that challenge after the data is extracted that people didn't bother to accept it, because nothing would be left to challenge. (You use the HOF feature to reduce code size already, and you are challenging FP to use HOF to reduce the code size again; how is that possible?) (Don't bring eval into here; that's another topic.) Why don't you want the same level of expressiveness as in your query?
You are happy that you have SQL, which is itself an HOF-enabled language, yet you are all against HOF? What is that? This may be a definition issue about what HigherOrderFunctions really are. Your definition seems to be loose enough that the "eval()" techniques I mention may also be considered HOF. Your conjecture is an interesting approach, but I still would like to see an actual example of it reducing code in a biz app. I selected challenge #6 because it is far more representative than the other examples given (though not perfectly representative in absolute terms). But then you suggest that something about it makes it not very reducible. Let's call it Factor R. What we need to find out is whether this Factor R exists because the example is closer to a real biz app, or for some other reason altogether. I have found an example of something that is apparently not very reducible by your techniques, so it makes an interesting exploration tool for FP relevancy — would you agree? It could be that SQL already absorbs the primary benefits. SQL and Eval together seem to cover so much of the territory that FP would otherwise cover that there is not a lot left over for "direct" FP to help with. -- top

SQL and eval together *are* FP. Especially when you're storing operations in tables. Again, I think that is a stretch of the definition of FP. Somebody else called expressions in tables OOP. I see it as DataAndCodeAreTheSameThing. My ideal is exposing the run-time engine (AdvantagesOfExposingRunTimeEngine) so that it does not matter; each is just a view of the same thing. I suppose what one calls it depends upon one's favored EverythingIsa view. Eval does not really care what is passed to it; it just evaluates it. It could be a function, expression, etc. Whatever they are called, I find them useful. The real issue is whether going to a more "native" or full-blown FP will make a significant difference.
I find it hard to show the benefit of HOF to you because you don't even yet know the difference between HOF and Eval. Eval can do everything HOF can,
//with each row, a higher order function that would eliminate 6 repeated loops within your sample
//and guarantee never having a bug caused by forgetting to movenext, or close.
//this is a function you desperately need
 function withEachRow(sqlString, aFunction){
   var rs = stdConn.execute(sqlString);
   try {
     while(!rs.eof){
       aFunction(rs);
       rs.movenext();
     }
   } finally {
     rs.close(); // ohhh, it's better than your code, it'll close the connection, even under an error condition, yours won't.
   }
 }

For one, we might not want to close it after each QueryAndLoop but leave it open for several loops to gain a little efficiency. Second, good web systems allow global default handlers to be defined for database errors that can close the connection, and even better, the connection should automatically close after each HTTP event (submit). There is no reason to keep it open, error or not. Third, sometimes we want different handling for zero or multiple results. For example, a custom "sorry, not found" message for zero result rows; and sometimes there should be one and only one result, such as a count query. Yours is not generic enough to handle all that, and making it generic enough will just bloat it up with creeping featuritis, especially in a larger app. Finally, a different contractor would come in and have to learn your custom query looper and all its little features, cranking up the learning curve for shop-specific conventions. I agree that it may shorten some loops by about 1.5 LOC (depending on language), but may result in only a few percent total. The claim was "significant reduction". 5% is not even close to what I consider "significant" to mean. -- top

So this one trick reduces 5% of the code — not enough to claim significant reduction. But who says we can only use one trick per app? Bring in another loop HOF construct, reduce another 5%. Bring in some MAP, INJECT, reduce another 10%.
And all that it reduced also increases stability in your code; every part that uses "withEachRow" will never leave its connection open, while hand typing may forget one. Reducing code size by 15% and increasing stability of code by 15-30% is not something you can do easily, and that's significant because it BOTH reduces code size and increases stability of the application. For the record, I don't see why it would make "withEachRow" bloated just to add a <close_on_exit?> flag and an <if_empty_result_do> function callback, both defaulting to current behavior; it would only add three more lines to the above code. And you are being funny here about "sometimes query only return one result". Duh, the above code is a utility for use in LOOPING over ROWS of a result set; if there is only one row then DON'T USE IT. It's funny you are complaining as if a "for loop" is not generic enough because in some cases we will only want to execute it one time; don't use the for loop there, man. For a query that returns only one value, like <count>, you already know that it will never return more than one value, so no one would be so stupid as to not understand that the above code is not for such usage. And the above code already always closes the connection, error or not.
//so you can keep writing your nested conditional code, within the safety of a nice HOF to protect you
//and eliminate all that rampant duplication in your sample code for this challenge.
 withEachRow("Select * from Users", function(rs){
   fldValue = trim(rs("fldValue") & "")
   fmtType = ucase(trim(rs("fmtType") & ""))
   if fmtType="N" and len(fldValue) > 0 then
     if not isNumeric(fldValue) then
       appendErr "Invalid Number: " & fldValue
     end if
   end if
   if fmtType="D" and len(fldValue) > 0 then
     if not isDate(fldValue) then
       appendErr "Invalid Date: " & fldValue
     end if
   end if
   if rs("Required") and len(fldValue)=0 then
     appendErr "Field is Required: '" & rs("fldTitle") & "'"
   end if
 });

//hey look, used it again, wow, isn't it flexible, it even works with your code.
 withEachRow("Select bla bla", function(rs){
   sql2 = "UPDATE userFields SET fldValue='" & trim(request("fld_" & rs("itemID"))) & "' "
   sql2 = sql2 & " WHERE userID=" & userID & " AND rptItemID=" & rs("itemID")
   stdConn.execute(sql2)
 });

//I'm betting you could also stand to use this one a lot.
 function withEachColumn(aRow, aFunction){
   for(var index = 0; index < aRow.Fields.Count -1; index++)
     aFunction(aRow[index]);
 }

I rarely use integer-indexed loops. To me they generally indicate a yellow alert.

then say..
 withEachColumn(aRow, function(aCol){
   hout("<th>" & aCol.Name & "</th>");
 });

I usually loop through the data dictionary, not map arrays. Plus, maps by definition do not define order, which generally creates problems. Looping thru maps is also a yellow alert.

There's all the external evidence you need; if you don't see it, then you're fucking blind, because your program is rampant with duplication that these two simple HOFs would eliminate entirely.

Below I argue that some of the bloat is due to the design of Microsoft's API rather than an inherent fault of non-FP itself. One may be able to build wrapper libraries, but I didn't bother in the example. Moved discussion to ResultSetSizeIssues.
 for(int i = 0; i < length; i++){
   ....
 }

than
 int i = 0;
 for(i = 0; i < length; i++){
   ....
 }

So the FP approach not only reduces code size a little but also reduces code complexity. How is that not significant? As I state, FP is not significant just because it can reduce code size. FP is significant because it both reduces code size and increases code comprehensibility/abstraction. I agree that is a nice feature, but I find it hard to call "significant". Again, you seem to be offering solutions to problems that I generally don't encounter in practice. I won't say I have never encountered problems from "leaky" loop scopes, but that probably would not make my top 10 list. -- top

It is probably possible to have a dynamic language where the scope is only in the given block. Thus, any variable created in a While loop will only be visible to that loop. But it may still clobber existing variables created before. But it is possible to have a language in which inner declarations don't clobber outer ones if explicitly declared. For the record, the FP version can also be made to read the whole result set into cache first.
 Query("SELECT * FROM employee where name = ?", name_param)                 # Prepared statement
 Query("SELECT * FROM employee where name = '" + name_param + "'")          # Concatenated string

In the case of the prepared statement, the query will work no matter what value is bound to the '?' place, and you can always be sure that its operation is to do a query on employee based on name. In the case of the concatenated string, it will work ONLY IF name_param does not contain any ' or comment character for SQL; any statement can even be added by escaping the string and putting ";" to separate a statement. You cannot be sure what the execution path is for this SQL unless name_param is known. This is not the kind of dynamicness one will ever want. Or could you give an example of where that is desirable? Adding clauses is always possible with a prepared statement. A PreparedStatement is not something that has to be created at compile time. You can create them on the fly. For example, this prepared statement can be created at runtime:
 SELECT * FROM employee WHERE name = ? AND phone = ? AND address = ? AND ...   # add as many "xxx = ?" programmatically.

And you can after that bind each '?' to a value, and there is no chance that the query result will change just because some of the data has '#' (comment character) in it. This is the same objection I have for eval. Suppose your eval statement is to do assignment to a variable. So you create this:
 def eval_assign(var_name, var_value)
   # <var_name> is a string, <var_value> is from the DB and never known.
   eval var_name + " = '" + var_value + "' + 'foo'"
   # results in, for example: eval "x = 'bar' + 'foo'"
 end

Now this would work fine for
 eval_assign("x", "baz")

But it will not work for this string:
 eval_assign("x", "boom' ; 'x")
 # assume ";" is statement separation. now we have: eval "x = 'boom' ; 'x' + 'foo'"

I can never think of a reason you would want this: you want <var_name> to always be a string you can assign to, no matter what its value is. (Or do you like a language where "x = y" works as long as y does not equal 0, 57, and 12357?) You cannot really be sure, with eval and string building, that the execution path can be determined based on only the code you wrote. "Code and data are the same thing" is fine for me, to the extent that I can be sure that what I want to be data doesn't suddenly choose to behave as code without my decision. There are different degrees of dynamicness one feels comfortable with. For me, I want dynamicness of code that is reasonable by looking at the code. I will never want some undetermined execution path to appear just because some weird data entered the system. Secondly, Eval is hard to play nice with itself. In HOF, one of the best conveniences is that a function can take another function as a parameter and return a function as a result of execution. Can Eval take a string that is the result of another eval to use in its code, or return a string for another function to use in an eval statement? Sure it can, but since code for eval is just a string, you will have to double/triple escape strings depending on where you want text to really behave as a string and where you want it to behave as code; which is a confusing process to get right, at least. As far as prepared statements, I don't care either way. I am tired of hearing about them. Use them if you want, and if it affects the code-size count in the end, I will accept an adjustment to compensate. PS does not reduce your code size; it is just safer to use than string building. Just like HOF and Eval. As far as Eval going bad, it has not been a problem for me. I focus on where actual problems occur in practice.
If I needed to use Eval a lot per app, it might start to become an issue, but I don't. It works just fine for the few places that extra indirection is needed. And, I have not figured out how to store HOF references in a database. But, I can put function names in a database. (It is not an issue for #6, I would note.) But generally I end up putting expressions in the DB for most Eval uses that I recall, not function names alone. So now you have lost the claim that putting function dispatch in the DB works across languages; you can't eval TCL code in Python. I'm not sure what you mean. I do it out of editing convenience and OnceAndOnlyOnce. If you put the same in code, then you have to carry the primary key in the code also to match up with the table, which contains the attributes. It then becomes a one-to-one relationship, which is something to generally avoid. For example, to add or subtract something from the table requires you to remember to do the same to the code-based list to keep them in sync. For example, if you have a menu system where the menu descriptions and nesting are stored in a table(s), then we probably want a way to associate the menu items with behavior. If we cannot put function names or expressions in the table, then somewhere in the code the primary key probably needs to be duplicated:
 // duplication example
 switch on menuID
   case 234: functionX(...)
   case 432: functionY(...)
   case 532: functionZ(...)
   otherwise: error(...)
 end switch

If it is in the menu table instead, we don't have to duplicate the key and remember to keep them in sync. Thus, it is better OnceAndOnlyOnce. It's already been stated that we can put the function name in the DB and use reflection to get the function to call. Don't you remember? You said "If we cannot put function names or expressions in the table...". We can put the function name in the table and use reflection to get the function to call, so no duplication like the example above will occur. As I said, Reflection is a different thing than HOF and closures. You keep adding features to solve problems. You need three different features to do what Eval does. Plus, what if we change it from putting function names in the DB to expressions in the DB? Now you need four features to do what Eval does:
 switch(f){
 ..case 'foo': foo();
 ..case 'bar': bar();
 ..case 'baz': baz();
 }

compare to
 (funcall (symbol-function f))

Or
 eval(funcName)

And it scales to parameters, expressions, etc.
 data = ....                          // get data from somewhere
 code = "print '" + data + "';"      // print the data as string (see those two 's?).
 eval code;

But did you notice that data is actually PART OF THE CODE? If data is, say, " '; print = 'some more", you will end up with a side effect that only happens because the data contains a single quote which escapes the string literal you intended. But looking at the code above, can this flaw be easily found by just looking at the code? I don't think so. Will a simple test case discover this? (Most people will just pass in normal data, with no "'", as test data.)

I don't see how the above is significantly more likely or less testable than any other dynamic-language bug. You keep talking about how risky eval is, but again it has not been a source of big problems in my experience. I don't need it that often, and when I do need it, if one exercises a little care, it does its job. You keep insisting that brand X is going to turn my laundry green, and with more than a decade of use, my laundry hasn't turned green. I can think of only one app where the user created the expressions to be evaled, and it was a local-use app with two power-users, not something all over the company. Most of the rest were either internally generated, or came from a ControlTable. Ok, is there any syntax error in this code?
 eval "print('" + data + "');"

I bet you cannot tell me whether there is a syntax error or not until you know what the data is, correct? A syntax error because of data? Good?

I don't understand what you are trying to get at. I restate: in practice, Eval has not been the boogey-man you make it out to be. Quotes and stuff may tend to trip you in particular up, but that does not mean they trip everybody else. Different people are tripped up by different things. -- top
 Benefits of each additional paradigm (illustrative only, not based on actual studies)
 1 | ***********
 2 | ***************
 3 | *****************
 4 | ******************
 5 | *******************

 Costs (hiring costs, staffing costs, staff transition delays, training, debugger cost, etc.)
 1 | ***********
 2 | **************
 3 | *****************
 4 | ********************
 5 | ***********************

Businesses want PlugCompatibleInterchangeableEngineers. They will only abandon this goal if you can show a compelling benefit. Going from 3100 lines of code to 3097 lines of code is not even near "compelling". Related: HackerLanguage. -- top

Well, much of the discussion on wiki is about how to improve software engineering, not how to kowtow to bad practices propagated by inertia in management and vendor products. Anyway, thanks for the answer, but no thanks to all the extra stuff (including invented graphs) that came with it. -- DanM

Business managers sometimes piss me off also, but your assumption that most business practices are inherently wrong is misguided in my opinion, and in this FP case I agree with their general rejection of FP. A good case for it has not been made in the domain. Its features seem better suited for systems software and communications tools. I will agree with this though: if staffing/training/education and language-complexity are not factors being weighed, then I could possibly side with you. I am just trying to put myself in the shoes of those who decide on languages and methodologies from a capitalism perspective. Whether that is the "right" perspective is perhaps another entire philosophical topic in itself. But, hopefully we have identified and narrowed down the areas where we differ. -- top
 Benefits of each additional paradigm (illustrative only, not based on actual studies)
 1 | *******
 2 | **********
 3 | *************
 4 | ****************
 5 | ************************

 Costs (hiring costs, staffing costs, staff transition delays, training, debugger cost, etc.)
 1 | ************
 2 | **************
 3 | *****************
 4 | ****************
 5 | ***************

Now we all should be using a multiparadigm language, shouldn't we? NOTE:
SchemeLanguage Code

 (map-query "select * from foo where bar = baz"
   (lambda (x) (printf "~a~%" (* x x))))

And this does what? What is "~a~%"? The SQL does not change, so why run it in a loop?

It's not in a loop, silly; the first argument is the query and the second is the body of the loop. "~a~%" is a Lisp format string.

What use is it over more conventional techniques? Stuff like "~a~%" is not very self-documenting, so I'd knock points off for that. (Who has been doing screwy grammar editing of late? Please be a bit more careful.)
 for(i = query("select (quux.foo,spam.bar,eggs.baz) from quux,spam,eggs where quux.foo = ? and spam.bar = eggs.baz", "abc");
     !done(i); next(i)) {
   printf("%i,%i,%i", i.get[0]+1, i.get[1]+22, i.get[2]*3);
 }

Functional code
 foreach(query("select (quux.foo,spam.bar,eggs.baz) from quux,spam,eggs where quux.foo = ? and spam.bar = eggs.baz", "abc"))
   (foo,bar,baz) => {
     printf("%i,%i,%i", foo+1, bar+22, baz*3)
   }

What is being demonstrated here?
 std::vector<std::string> x() {
   return {"foo", "bar", "baz"};
 }

Continued at ChallengeSixLispVersionDiscussion...
I have never heard of using namespace std; being required for any standards... Anyone else?
Code:
// This is the Hello World program (HelloWorld.cpp)
// Note: You should always start with a description of your program here.
// Written by: Colin Goble
// Note: Please put your name here or both names if you are working in a pair.
// Date: 19 June 2005
// Note: Put the current date here
// Sources: None
// Note: Always cite all sources you may have used, such as other students you
// may have had help from, web sites you may have referenced, and so forth.
// If none, write None.
// The next few lines are standard boilerplate you will need...
// ...in front of virtually every C++ program you will write this term.
#include <iostream> //Required if your program does any I/O
using namespace std; //Required for ANSI C++ 1998 standard.
int main ()
{
char reply;
cout << "Hello World" << endl;
// This section stops the program 'flashing' off the screen.
cout << "Press q (or any other key) followed by 'Enter' to quit: ";
cin >> reply;
return 0;
} | https://cboard.cprogramming.com/cplusplus-programming/80439-using-namespace-std%3B-printable-thread.html | CC-MAIN-2017-04 | refinedweb | 189 | 81.22 |
PLINQ Operators and Methods
You can modify the behavior of a PLINQ query with a variety of clauses and methods that are actually extension methods of ParallelQuery<TSource>. Most of these are the same clauses and methods available to LINQ. You can use these operators either independently or together to affect the behavior of a PLINQ query. However, PLINQ also introduces some new constructs, which are introduced in this section.
The ForAll Operator
You create a PLINQ query to parallelize your code. In most circumstances, the next step is to iterate the results by using a foreach or for method. At that time, the query is most likely performed by using deferred execution. The results are processed in iterations of the foreach loop. There is only one problem: the foreach loop is sequential. This is a classic "hurry-up-and-wait" scenario. After executing a PLINQ query, you might want to extend parallelism to handle the results in parallel as well.
LINQ's Parallel.ForEach method is useful for parallelizing the same operation over a collection of values. It would appear natural to adhere to the same model to process the results of a PLINQ query. PLINQ returns a ParallelQuery<TSource> type, which represents multiple streams of data. However, Parallel.ForEach expects a single stream of data, which is then parsed into multiple streams. For this reason, the Parallel.ForEach method must recognize and convert multistream input to a single stream. There is a performance cost for this conversion.
The solution is the ParallelQuery<TSource>.ForAll method. The ForAll method directly accepts multiple streams, so it avoids the overhead of the Parallel.ForEach method. Here is a prototype of the ForAll method. The first parameter is the target of the extension method, which is a ParallelQuery type. The last parameter is an Action delegate. For the Action delegate, you can use a delegate, a lambda expression, or even an anonymous method. The next element of the collection is passed as a parameter to the delegate.

public static void ForAll<TSource>(
    this ParallelQuery<TSource> source,
    Action<TSource> action )
Here is a short demonstration that illustrates how to use the ForAll operator. In this example, you will perform a parallel query on a string array and then select and display strings longer than two characters in length.
Perform a parallel query of a string array
1. Create a console application for C# in Visual Studio. In the Main method, define a string array.
string [] stringArray = { "A", "AB", "ABC", "ABCD" };
2. Perform a PLINQ query on the string array. Select strings with a length greater than two.
var results=from value in stringArray.AsParallel() where value.Length>2 select value;
3. Call the ForAll operator on the results. In the lambda expression, display the current item.
results.ForAll((item) => Console.WriteLine(item));
Here is the source code for the entire application:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ForAll
{
    class Program
    {
        static void Main(string[] args)
        {
            string[] stringArray = { "A", "AB", "ABC", "ABCD" };
            var results = from value in stringArray.AsParallel()
                          where value.Length > 2
                          select value;
            results.ForAll((item) => Console.WriteLine(item));
            Console.WriteLine("Press Enter to Continue");
            Console.ReadLine();
        }
    }
}
The application will display ABC and ABCD as the result.
Using ParallelExecutionMode
So far, we have used the AsParallel method to convert LINQ to PLINQ. It is a simple change to a LINQ query that alters the semantics completely.
A PLINQ query is not guaranteed to actually execute in parallel. Overhead from executing the parallel query in parallel, such as thread-related costs, synchronization, and the parallelization code, can exceed the performance gain. Determining the relative performance benefit of the PLINQ query is an inexact science based on several factors. Here are some of the considerations that might affect the performance of a PLINQ query:
- Length of operations
- Number of processor cores
- Result type
- Merge options
One of the biggest factors is the duration of the parallel operations, such as the Select clause. Dependencies and the synchronization that results from them adversely affect the performance of any parallel solution. Furthermore, shorter operations might not be worth parallelizing, because the associated overhead might exceed the duration of the operation. For small operations, you could change the chunking to improve the balance of execution to overhead. Custom partitioners, including those that change the chunk size, are an option.
The number of processor cores might affect the performance of your parallel application, including PLINQ. However, you should typically ignore the number of processor cores, because that's mostly beyond your control. Maintaining hardware independence in your application is important for both scalability and portability.
PLINQ does not consider all of the above factors when deciding to execute a query in parallel. Based on the shape of the query and the clauses used, PLINQ decides to execute a query either in parallel or sequentially. You can override this default by using the WithExecutionMode clause with the ParallelExecutionMode enumeration as a parameter. The two options are ParallelExecutionMode.ForceParallelism and ParallelExecutionMode.Default. Use the ParallelExecutionMode.ForceParallelism enumeration to require parallel execution.
The ParallelExecutionMode.Default value defers to PLINQ for the appropriate decision on the execution mode. Here is an example that forces a parallel PLINQ query.

from item in data.AsParallel()
               .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
select item;
Using WithMergeOptions
How the result of your query expression is handled can also affect performance. For example, the following PLINQ query returns a List<T> type. Converting the PLINQ result to a list requires that the results be buffered in order to return an entire list.

intArray.AsParallel()
        .Where((value) => value > 5)
        .ToList();
As mentioned, for the aforementioned code, the results are buffered. In some circumstances, PLINQ might buffer the results, but that is mostly transparent to your code.
Using the .NET Framework 4 thread pool, PLINQ uses multiple threads to execute the query in parallel. The results of these parallel operations are then merged back into the joining thread. The merge option describes the buffering used when merging results from the various threads.
Here are the merge options as defined in the ParallelMergeOptions enumeration:

- NotBuffered: The results are not buffered. For operations such as the ForAll operation, NotBuffered is the default.
- FullyBuffered: The results are fully buffered, which can delay receipt of the first result.
- AutoBuffered: This option is similar to NotBuffered, except that the results are returned in chunks.
- Default: The default is AutoBuffered.
You can override the default buffer preference with the WithMergeOptions operator.
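As an illustrative sketch of the WithMergeOptions operator (not from the article itself; `intArray` stands in for any source sequence), the merge option is simply attached to the query chain:

```
// Sketch: request streaming (unbuffered) merging so the consuming thread
// starts receiving results before all worker threads have finished.
// "intArray" is a placeholder source, as in the earlier listings.
var results = intArray.AsParallel()
                      .WithMergeOptions(ParallelMergeOptions.NotBuffered)
                      .Where((value) => value > 5);

foreach (var value in results)          // each element arrives as soon as it is produced
    Console.WriteLine(value);
```

With ParallelMergeOptions.FullyBuffered instead, the foreach loop would not see its first element until the whole query had completed.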
Using AsSequential
The difference between PLINQ and LINQ starts with the AsParallel clause. As we've seen, converting from LINQ to PLINQ is often as simple as adding the AsParallel method to a LINQ query. Here is a basic LINQ query:

numbers.Select( /* selection */ )
       .OrderBy( /* sort */ );
Here is a parallel version of the same query, with the required AsParallel method added.

numbers.AsParallel()
       .Select( /* selection */ )
       .OrderBy( /* sort */ );
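The AsSequential operator itself does the reverse of AsParallel: it returns the query to ordinary sequential LINQ from that point on. A hedged sketch of the usual pattern (`numbers` is again a placeholder source, and `ExpensiveTransform` is a hypothetical method standing in for costly per-element work):

```
// Sketch: parallelize the expensive portion, then call AsSequential()
// so the remaining operators run on a single thread as ordinary LINQ.
var results = numbers.AsParallel()
                     .Select(n => ExpensiveTransform(n))   // runs in parallel
                     .AsSequential()                       // back to sequential LINQ
                     .OrderBy(n => n);                     // executed sequentially
```

This split is useful when only part of a query benefits from parallelism, or when a later operator must run sequentially.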
User talk:Concernedresident
From Uncyclopedia, the content-free encyclopedia
--SockMob 04:29, 22 March 2009 (UTC)
Hey so hows my article anyway? And what does "Where are you going with this?" suppose to mean?
- Morning Sock. I had a read of it last night and it just seemed pretty random, so I was curious to know where the joke is going? It looked a bit like one of those pages where someone starts with an idea but then drifts away. Yup, I didn't get the South Park reference. I blame my late night reading skills. --Concernedresident 11:47, 22 March 2009 (UTC)
A thought
Perhaps your works in progress are better off being in your namespace. That's what us pros do, anyways. (That's right, I'm totally a pro at this...) —Sir SysRq (talk) 16:56, 22 March 2009 (UTC)
- Ello there. Was there a specific article I messed up on? --Concernedresident 17:33, 22 March 2009 (UTC)
- I was talking more about what you have going on here on your talk page. In general, we like to keep these talk page things free of anything that isn't, you know, talk. If you want somewhere to work on an article, try making it at User:Concernedresident/Article title here. Most people do that for all of their works in progress. —Sir SysRq (talk) 17:40, 22 March 2009 (UTC)
- Ah, that makes sense. Thanks for the tip, and okaying me for colonisation --Concernedresident 17:43, 22 March 2009 (UTC)
- No worries mate. So have you been like kinda in and out over the years? I saw you've got edits in 2007 but I haven't seen much of you to be honest. —Sir SysRq (talk) 17:49, 22 March 2009 (UTC)
- I signed-up in 2007 and had an initial burst of enthusiasm, but then kind of drifted away for a year. I blame World of Warcraft. Came back in late 2008 and started writing some UNnews and a couple of articles. I'm trying to limit the amount of new stuff I write, since I reckon there are a lot of existing articles that needs some lovin' or a mercy killing. --Concernedresident 17:59, 22 March 2009 (UTC)
- Well that's more what IC is here for. I'd still encourage you to write, maybe rewrite a few articles here and there, but when it comes to site improvement via rewrites you're free to fight alongside your fellow Lobsterbacks. See you in the untamed wilderness, my son. Oh, and here in an our or so we're starting the next article so stick around. —Sir SysRq (talk) 21:49, 22 March 2009 (UTC)
back again
Nasty back injury kept me out for a while, but I'm back again.
Are you some kind of evolutionist?
Are you saying I and muh childrens came from some kind of ape? • <20:22 May 21, 2009>
We don't allow yer kind here. • <20:17 May 21, 2009>
- Ya'll 'aint thinkin' boy. I's been tryin' to edumacate these here atheists, and here you go blowing mah cover. Baby Jesus have mercy on ya boy. --Concernedresident 20:25, 21 May 2009 (UTC)
Is this the UnBook inspired by my dinosaur sojourn? Nice stuff! , 21 May 2009 (UTC)
- Yep, thanks for the idea. I'll a link in there to the original Medozoic story of jungle love. --Concernedresident 20:29, 21 May 2009 (UTC)
- Praise the lord, the shephard, and the mustard seed, anuther Chrishun! I wanna give you my favorite quote from the bible, embroidered on all my couch pillows... It's helped me through some tough times on this site:
You's got Jaysus in ye boy, 'aint no mistakin' that.
--Concernedresident 20:50, 21 May 2009 (UTC)
As it says in the most holy holiest of holy bibles,
- Jesus wept, that original Deuteronomy quote is real. How the hell did I miss that one before? --Concernedresident 23:13, 21 May 2009 (UTC)
- I made an article, Sermon, with a bunch of those things. • <23:36 May 21, 2009>
- Thanks. Evilbible.com only tends to focus on the evil stuff, not the downright odd. --Concernedresident 23:40, 21 May 2009 (UTC)
- Ha. This weekend I really need to spend some time in the Old Testament. --Concernedresident 23:50, 21 May 2009 (UTC)
- Arguably my most favourite bible quote. Because it's so close to the truth, I, 21 May 2009 (UTC)
- All I need to know is whether he's pro or anti pillows. When I found out that Jesus "is against my pillows" I totally freaked. I was like a teenage girl finding out who's hot and who's not! In this case, it was the pillows that were not. • <23:58 May 21, 2009>
- Well, my pillows are always:00, 22 May 2009 (UTC)
- Behold, God is against your hot, sexy pillows • <2:17 May 22, 2009>
- Blasphemy! :53, 22 May 2009 (UTC)
- My spidey sense tells me that there's a stoning in your very near future. Gravel isn't an option! --Concernedresident 22:56, 22 May 2009 (UTC)
Woman flees...
I'm glad you liked the article. I'm guessing you must have read Alcott's "Little Women" series! --Clemens177 16:41, 22 May 2009 (UTC)
- No, but I should do. I'm just a fan of shrewd observations and medical history. Got to keep the humours in balance. Very skillfully written article you have there. Good call on the revert. Implied humour beats being slapped in the face with it in this case. --Concernedresident 22:51, 22 May 2009 (UTC)
Game
You could have it be like a quiz show thing. "The women of your village have just sewn pillows. Is that good, or should they have rocks thrown at them???" And then, when you choose, it shows the bible quote and it says "Behold, God is against your pillows" right after (repetition joke). If the person gets it wrong, you get sent to "Hell" and automatically lose. • <23:42 May 22, 2009>
- Heh heh. The idea of asking common sense questions that have bloody crazy answers is good. I'll take a look at some of the existing games to see how it can be worked in to an article. Right now I'm not sure about the format. --Concernedresident 01:31, 23 May 2009 (UTC)
- User:Concernedresident/Game:Stone the Sinner • <5:40 May 23, 2009>:47, 23 May Dictated; not read.
- I'm half-blind from whacking off, but I reckon I can fix that article using my heightened sense of smell. I'll poke around there tomorrow. --Concernedresident 19:38, 23 May 2009 (UTC)
edit Welcome to UnNews
I can't remember if I gave you the welcoming, so here it is, just in:36, February 14, 2010 (UTC)
Reverend Zim_ulator says: "There are coffee cup stains on this copy, damnit! Now that's good UnJournalism."
Welcome to UnNews, Concernedresident, UnNews:Zodiac wrong, world copes
Good work! I changed the headline, as shown above, and went through it to do a hyphen hunt and provide the actual name of your source. A conceptual problem in both the two lead paragraphs and the former headline, there are three newses here: (1) we find out the zodiac is wrong, (2) it goes public, and (3) riots, schizophrenia, etc. (PS--Oh yeah, (4) "International community promises response.") Sort of presented in reverse chronological order. Oh well. But would you please code an actual dateline (somehow, LOS ANGELES, California comes to mind) or just get rid of it altogether? Real News Stories aren't written from "Everywhere." Thanks! Spıke Ѧ 00:34 16-Jan-11
- Cheers, Spike. I kind of started that article and then got distracted. I'll go back and have a look. Concernedresident 21:41, February 5, 2011 (UTC)
Very good. The problem with two-week hiatuses on UnNews is that we always move on to new stories. See you again on the next one! Spıke Ѧ 22:14 5-Feb-11
edit Award from UN:REQ | http://uncyclopedia.wikia.com/wiki/User_talk:Concernedresident | CC-MAIN-2015-27 | refinedweb | 1,358 | 82.95 |
Tax
Have a Tax Question? Ask a Tax Expert
Hi and welcome to Just Answer!
Should deductions be allocated to accounting income or corpus.
Deductible expenses are attributable to taxable income first and if anything left - these expenses are covered from corpus.
There are 3 beneficiaries (equal shares) - 2 individuals, one non-profit. I would like to make this the initial and final return. Is this possible?
Yes - that is possible. You need to have complete distribution of all assets and on the form 1041 - in the header - both boxes should be checked "Initial return" and "Final return"
Correspondingly - on the form K-1 - check the box "Final K-1"
Do I have to allocate the excess deductions to the beneficiaries and do they get a tax benefit out of this?You may allocate the excess deductions to the beneficiaries - will or will not they be able to use any passed through deductions - depends on their individual circumstances.
Excess deductions for the final year are generally reported by beneficiaries on schedule A, line 23.
Let me know if you need any help.
Be sure to ask if any clarification needed.
Thank you Lev. How about the the second half of the question?
Sorry I overlooked these items...
I assume the above two items have no tax implications.
You are correct.
1.
Life insurance proceeds paid to you because of the death of the insured person are not taxable unless the policy was turned over to you for a price. This is true even if the proceeds were paid under an accident or health insurance policy or an endowment contract.
2.
As a recipient of inheritance - either the estate or the beneficiary - does not need to claim it as income. Regardless of the value. Please see for reference IRS publication 525 page 33 -
Does the $48K and $12K have to be reported somewhere on return even though it is not income?
No need to report these items on the income tax return. But you may mention these amounts in comments on K-1.
Does the non-profit get mentioned even though only what I assume is corpus will be distributed to it?
As - there are neither income nor deduction items - no need to mention on the income tax return.
Sorry for confusion.
Your information helped me immensely. Last thoughts to make sure I understood correctly:
1) Should my comments on the K-1 say beneficiary will receive $20k [($48K bank accounts + 12K life insurance) * .333]?
2) Should I include this distribution from corpus on line 10, sch B, form 1041?
3)Is it okay to mark the return ending 11/30/2011 final even though the corpus has not yet be distributed? Note, that it will be distributed shortly and no income existed after 11/30/2011 and none ever will
You may but are not required to mention that information on K-1.
Some administrators provide additional letter that is not sent to the IRS with detailed distribution info.
Form 1041 is to report items related to income taxes - those related to income and deductions. No need to mention the distribution from corpus on schedule B.
3)Is it okay to mark the return ending 11/30/2011 final even though the corpus has not yet be distributed?
You may do that if you do not expect any following income tax returns for the trust. However that is not a good practice - because you may not use EIN of the trust after 11/30/2011. For instance - all trust's bank accounts should be closed as EIN would not be valid after that date.
As a general practice - the Final tax return is filed after all trust's assets are distributed and the trust is terminated.
Let me know if you need any help.
Your insights have been fabulous.
To wrap this up finally, will it be ok to file the final return (11/30/2012) with zero income? All that's going to be done from now until 11/30/2012 is to liquidate the bank accounts and give 1/3 each to 2 individuals and one charity.
Yes - you may file the income tax return with zero income and mark it as a final tax return - assuming there is no income in reporting period. | http://www.justanswer.com/tax/773kn-individual-passed-away-dec-2010-completing-1041.html | CC-MAIN-2017-04 | refinedweb | 716 | 64.71 |
C
C is an actively used programming language created in 1972. C (,. Read more on Wikipedia...
- C ranks in the top 1% of languages
- the C wikipedia page
- C first appeared in 1972
- file extensions for C include c, cats, h, idc and Mono
- replit has an online C repl
- See also: cyclone, unified-parallel-c, split-c, cilk, b, bcpl, cpl, algol-68, assembly-language, pl-i, ampl, awk, c--, csharp, objective-c, d, go, java, javascript, julia, limbo, lpc, perl, php, pike, processing, python, rust, seed7, vala, verilog, unix, algol, swift, multics, unicode, fortran, pascal, mathematica, matlab, ch, smalltalk
- I have 136 facts about C. what would you like to know? email me and let me know how I can help.
Example code from Linguist:
#ifndef HELLO_H #define HELLO_H void hello(); #endif
Example code from Wikipedia:
#include <stdio.h> int main(void) { printf("hello, world\n"); }
Trending Repos
Last updated December 4th, 2019 | https://codelani.com/languages/c.html | CC-MAIN-2019-51 | refinedweb | 155 | 59.03 |
Choose QtQuick.Controls 1.4 for CheckBox
Hi all
I have qml window with TableView (QtQuick.Controls 1.4), but I want ToolTip also (QtQuick.Controls 2.0)
but I have few CheckBox and ComboBox components on my form too.
So question is possible choose CheckBox and ComboBox from QtQuick.Controls 1.4 to use on my form or no?
Qt 5.7.
thanks
You can use delegate in each of the columns of TableView
TableViewColumn{role : "showme";title :"Select";width:80;
delegate: CheckBox{}
}
hi @dheerendra
thank you for answer
yes I'm use delegate item, something like this
but problem that this CheckBox is from Controls 2.0 but I want CheckBox from Controls 1.4
is this possible?
Component { id: cbdDeploymentStatus CheckBox { // this check box from Controls 2.0 // but I want, Contorol 1.4 // is this possible? } }
Try something like this ?
import QtQuick 2.6
import QtQuick.Window 2.2
import QtQuick.Controls 2.0
import QtQuick.Controls 1.2 as Controls12
Window {
visible: true
width: 640
height: 480
title: qsTr("Hello World")
Controls12.CheckBox { text : "Hell20" } CheckBox{ x : 100 text : "Hell20" } Text { text: qsTr("Hello World") anchors.centerIn: parent }
}
hi @dheerendra thanks for answer
yep I try this
but take error "Invalid import qualifier ID" with such instruction
Did u try my sample. Did that work ?
hi @dheerendra
yep it's work!!! great
but I try this by myself yesterday, but nothing work))
are you wizard?))
thank you
@dheerendra said in Choose QtQuick.Controls 1.4 for CheckBox:
as Controls12
Alias should start with upper case
import QtQuick.Controls 1.4 as Controls12 // work
import QtQuick.Controls 1.4 as controls12 // not work | https://forum.qt.io/topic/75675/choose-qtquick-controls-1-4-for-checkbox | CC-MAIN-2020-05 | refinedweb | 277 | 70.5 |
"""Mixes in place, i.e. the base class is modified. Tags the class with a list of names of mixed members. """ assert not hasattr(base, '_mixed_') mixed = [] for item, val in addition.__dict__.items(): if not hasattr(base, item): setattr(base, item, val) mixed.append (item) base._mixed_ = mixeddef unMix (cla):
"""Undoes the effect of a mixin on a class. Removes all attributes that were mixed in -- so even if they have been redefined, they will be removed. """ for m in cla._mixed_: #_mixed_ must exist, or there was no mixin delattr(cla, m) del cla._mixed_def mixedIn (base, addition):
"""Same as mixIn, but returns a new class instead of modifying the base. """ class newClass: pass newClass.__dict__ = base.__dict__.copy() mixIn (newClass, addition) return newClass
def mixin (existingClass, mixinClass): for item, val in mixinClass.__dict__.items(): if not hasattr(existingclass, item): setattr(existingclass, item, val)This copies not just functions, but any class members. Example usage:
class addSubMixin: def add(self, value): return self.number + value def subtract(self, value): return self.number - value class myClass: def __init__(self, number): self.number = numberThen, at runtime, you can mix any class into any other with:
mixin(myClass, addSubMixin) myInstance = myClass(4) myInstance.add(2) myInstance.subtract(2)
class myClass(myClass, myMixin) passYou just choose the flavors and add them in. Wants forking? add ForkingMixIn. see also Twisted Python and wxPython for a lot of MixIn uses. --JuneKim | http://c2.com/cgi/wiki?MixinsForPython | CC-MAIN-2015-48 | refinedweb | 237 | 61.02 |
Make your points expressive
Being able to show individual data points is a powerful way to communicate. Being able to change their appearance can make the story they tell much richer.
A note on terminology: In Matplotlib, plotted points are called "markers". In plotting, "points" already refers to a unit of measure, so calling data points "markers" disambiguates them. Also, as we'll see, markers can be far richer than a dot, which earns them a more expressive name.
We'll get all set up and create a few data points to work with. If any part if this is confusing, take a quick look at why it's here.
import matplotlib matplotlib.use("agg") import matplotlib.pyplot as plt import numpy as np x = np.linspace(-1, 1) y = x + np.random.normal(size=x.size) fig = plt.figure() ax = fig.gca()
Change the size
ax.scatter(x, y, s=80)
Using the
s argument, you can set the size of
your markers, in points squared. If you want a marker 10 points
high, choose
s=100.
Make every marker a different size
The real power of the
scatter() function somes out when
we want to modify markers individually.
sizes = (np.random.sample(size=x.size) * 10) ** 2 ax.scatter(x, y, s=sizes)
Here we created an array of sizes, one for each marker.
Change the marker style
Sometimes a circle just doesn't set the right tone. Luckliy Matplotlib has you covered. There are dozens of options, plus the ability to create custom shapes of any type.
ax.scatter(x, y, marker="v")
Using the
marker argument and the right character code,
you can choose whichever style
you like. Here are a few of the common ones.
- ".": point
- "o": circle
- "s": square
- "^": triangle
- "v": upside down triangle
- "+": plus
- "x": X
Make multiple marker types
Having differently shaped markers is a great way to distinguish between different groups of data points. If your control group is all circles and your experimental group is all X's the difference pops out, even to colorblind viewers.
N = x.size // 3 ax.scatter(x[:N], y[:N], marker="o") ax.scatter(x[N: 2 * N], y[N: 2 * N], marker="x") ax.scatter(x[2 * N:], y[2 * N:], marker="s")
There's no way to specify multiple marker styles in a single
scatter() call, but we can separate our data out
into groups and plot each marker style separately. Here we chopped
our data up into three equal groups.
Change the color
Another great way to make your markers express your data story is by changing their color.
ax.scatter(x, y, c="orange")
The
c argument, together with any of the color names
(
as described in the post on lines) lets you change your
markers to whatever shade of the rainbow you like.
Change the color of each marker
If you want to get extra fancy, you can control the color of
each point individually. This is what makes
scatter()
special.
ax.scatter(x, y, c=x-y)
One way to go about this is to specify a set of numerical values for the color, one for each data point. Matplotlib automatically takes them and translates them to a nice color scale.
Make markers transparent
This is particularly useful when you have lots of overlapping markers and you would like to get a sense of their density.
x = np.linspace(-1, 1, num=1e5) y = x + np.random.normal(size=x.size)
To illustrate this, we first create a lot of data points.
ax.scatter(x, y, marker=".", alpha=.05, edgecolors="none")
Then by setting the
alpha argument to something small,
each individual point only contributes a small about of digital ink
to the picture. Only in places where lots of points overlap is the
result a solid color.
alpha=1 represents no transparency
and is the default.
The
edgecolors="none" is necessary to remove the marker
outlines. For some marker types at least, the
alpha
argument doesn't apply to the outlines, only the solid fill.
and even more...
If you are curious and want to explore all the other crazy things you can do with markers and scatterplots, check out the API.
We've only scratched the surface. Want to see what else you can change in your plot? Come take a look at the full set of tutorials. | https://e2eml.school/matplotlib_points.html | CC-MAIN-2020-16 | refinedweb | 732 | 66.84 |
0
Hi everyone.
Recently I have been working on an XOR encryptor/decryptor. I have looked at tutorials to get ideas. Now, I have created my encryptor/decryptor and I have absolutely messed it up i think, because the output is not right. What I am trying to do here is:
- Creating a string for the user to input the unencrypted text.
- Create a for loop.
- Get the length of the string by using string.length()
- Using Xor operator to "XOR" the string with the encryption key i made.
- Then create another string for the user to input the encrypted text.
- Use the same keys to "XOR" with the encrypted text to obtain
I will add a switch statement later.
Here is the code I've made, i do not know what to put in after
"^ encryption_key[]" in the for loop. I know what the [] is for though.
#include <iostream> #include <string> using namespace std; int main () { string decrypt_string; string input_string; char encryption_key[8] = "ABCDEFG"; cout << "Please enter the text you would like to encrypt below:"<<endl<<endl; getline (cin, input_string); cout<<endl<<endl<<"Below is the encrypted text:"<<endl; for(int i=0; i<=input_string.length(); i++) { input_string[i]=input_string[i] ^ encryption_key[]; // I do not know what to put in for the [] after encryption_key. cout<<input_string[i]; } cout<<endl<<endl; cin.get(); cout << "Please enter the text you would like to decrypt below:"<<endl<<endl; getline (cin, decrypt_string); decrypt_string.length(); cout<<endl<<endl<<"Below is the decrypted text:"<<endl; for(int y=0; y<=decrypt_string.length(); y++ ) { decrypt_string[y]=decrypt_string[y] ^ encryption_key[]; cout<<decrypt_string[y]; } cout<<endl<<endl; system("PAUSE"); return 0; }
Thanks. | https://www.daniweb.com/programming/software-development/threads/250584/my-xor-encryptor-decryptor | CC-MAIN-2016-50 | refinedweb | 274 | 66.84 |
You can subscribe to this list here.
Showing
2
results of 2
Dunno if anyone cares - but after *MONTHS* of asking, the SourceForge
sysadmins *finally* gave me control of the freeglut mailing list administration.
----------------------------- Steve Baker -------------------------------
HomeMail : <sjbaker1@...> WorkMail: <sjbaker@...>
Projects :
What are the outstanding problems with freeglut?
Things I know about (for mainly the X11 verion) are...
The soname. I think we should leave this one for a bit.
It crashes if the DISPLAY environment variable is not set.
The main loop never sleeps, it seems to really suck on those cycles
when it has nothing to do.
The modifier values differ from glut.
No key up events and missing special keys.
Some of the callbacks that should be global are per window.
Needs more geometrical shapes.
Stoke fonts.
Anyone know of anything else?
I'll get to fixing what's left of that at the weekend.
It's not really practical for me to do much during the week.
As well as those things there are some things I'm not too happy with...
The internal symbols pollute the application namespace. They are all in
the form of fgSomeThing. This is far too generic. They should at least
be prefixed with an underscore or two.
It uses OpenGL to render the menus. This can cause problems.
The code is full of assert()s. I just have a problem with assert().
The source tree could do with a bit of reorganization.
--
Christopher John Purnell | I thought I'd found a reason to live | Just like before when I was a child
--------------------------| Only to find that dreams made of sand
What gods do you pray to? | Would just fall apart and slip through my hands | http://sourceforge.net/p/freeglut/mailman/freeglut-developer/?viewmonth=200108 | CC-MAIN-2016-07 | refinedweb | 285 | 86.4 |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
We used to use sub-tasks but have decided against using them anymore. I moved about 5,000 sub-tasks to one sub-task named simply "sub-task". The reason why is because you cannot bulk change a sub-task to be an issue. So I am stuck with one "sub-task" within my issue type scheme. My question is this: is there a way to allow sub-tasks (I don't want to manually move them to be issues) but prevent users from creating new ones? I have about 300 users and they are very difficult to train. Thanks!
Hey Suzanne,
You can remove any subtask issue types from your issue type configuration scheme.
I should say I am looking for something within the workflow that would prevent users from creating sub-taaks.
There are some great suggestions from the user community around moving those sub tasks into issues:
If you can get this working then you can just disable sub-tasks natively.
In the sub-task workflow in the Create Issue transition put a custom script validator (using Script Runner plugin):
import com.opensymphony.workflow.InvalidInputException invalidInputException = new InvalidInputException("Subtasks cannot be created manually.")
Problem is that it will complain after user pressing 'Create' button, not. | https://community.atlassian.com/t5/Jira-questions/I-need-to-prevent-user-from-creating-sub-task/qaq-p/429398 | CC-MAIN-2018-34 | refinedweb | 230 | 54.83 |
Is there a method to revoke software/user certificate profiles via the APIs for FIMCM (CLM)?
There is an API method (externalSubmitDisableRequest) for smart-card profiles - but one of its inputs is the smart-card serial number - which doesnt apply to non smart-card certificate profiles.
You will have to use the provisioning API to write your own procedures.
Brian
Brian,
I dont immediately see where I can extend the calls of the Microsoft.CLM.Provision namespace. Can you point me at something I am missing?
If there were even a way to call a method that would update the status of a profile (to disabled) - I could then revoke the linked cert at the CA and then update FIMCM. As it stands, I am tempted to try just flipping the status flag on the profile
directly in the DB.
Michael
Did anyone find a solution for this? I have exactly the same problem. I can use the Microsoft.Clm.Provision.RequestOperations and Microsoft.Clm.Provision.ExecuteOperations classes to retire a smartcard. However there does not seem to be a method to
revoke Profiles (soft tokens).
Frank
Microsoft is conducting an online survey to understand your opinion of the Technet Web site. If you choose to participate, the online survey will be presented to you when you leave the Technet Web site.
Would you like to participate? | https://social.technet.microsoft.com/Forums/en-US/2bdc3f97-b9d4-4236-830e-c8295a77d2bc/method-to-revoke-software-certificate-profiles-in-the-fimcm-api?forum=identitylifecyclemanager | CC-MAIN-2017-47 | refinedweb | 227 | 66.33 |
Ok.. I have alot of experience with php and html ect.
I knew some c++ but never really gain experience and haven't practice in a while since I been working on a website.
I would like someone to explain to me what classes are used for. I did read what classes are on here bu still confused.
I am guessing class are ways to organize code. Like for example I make a class house.
this would mean any function in class house would be all functions and code for a house.
like door () would be open close lock ect.
am I somewhat right that class is like organizing code so if I were making a computer game I can have the house models and code to be under a house class??
I would like also someone to explain me about namespaces.
explain to me by just giving a example that would show a picture of what it's uses are for. | https://cboard.cprogramming.com/cplusplus-programming/106665-need-help-wiht-using-classes-namespaces-printable-thread.html | CC-MAIN-2017-04 | refinedweb | 162 | 82.14 |
91104/how-to-read-a-dataframe-based-on-an-avro-schema
I am unable to import from_avro in Pyspark. Trying to run a spark-submit job by invoking the external package for Avro.
Eg: spark-submit --packages org.apache.spark:spark-avro_2.12:3.0.1 test1.py
My test1.py file contains the import statement
from pyspark.sql.avro.functions import from_avro, to_avro
Getting ImportError: NO module names avro.function
Please help!!! Need to import from_avro using python code
Hi,
I am able to understand your requirement. But your error says you don't have the avro.function package in your system. First, check all the available packages in your Spark.
Hi@ana,
While working with spark-shell, you can also use --packages to add spark-avro_2.12 and its dependencies directly.
$ ./bin/spark-shell --packages org.apache.spark:spark-avro_2.12:2.4.4
we should use DataSource format as “avro” or “org.apache.spark.sql.avro” and load() is used to read the Avro file.
$ val personDF= spark.read.format("avro").load("person.avro")
Use the function as following:
var notFollowingList=List(9.8,7,6,3 ...READ MORE
Hey,
You can try this:
from pyspark import SparkContext
SparkContext.stop(sc)
sc ...READ MORE
Source tags are different:
{ x : [
{ ...READ MORE
Hi@akhtar,
When we try to retrieve the data ...READ MORE
Instead of spliting on '\n'. You should ...READ MORE
Hey there!
You can use the select method of the ...READ MORE
When you concatenate any string with a ...READ MORE
spark do not have any concept of ...READ MORE
Hi@Manas,
You can read your dataset from CSV ...READ MORE
Hi@akhtar,
Since Avro library is external to Spark, ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/91104/how-to-read-a-dataframe-based-on-an-avro-schema | CC-MAIN-2021-39 | refinedweb | 310 | 71.41 |
Hi Rick,
thank you for your reply. Sorry for not being clearer. The load is
succeeding even without the derby.ui.codeset property being set. However,
when selecting the data in ij afterwards, some of the special characters do
not display as intended.
So there is some sort of encoding/conversion mismatch happening during
import of the SQL file. Surely there is a way to avoid this so I'm trying to
figure out what the correct encoding for the SQL file is to prevent such
problems without having the specify the derby.ui.codeset property.
Greetings/Thanks
Robert
--
View this message in context:
Sent from the Apache Derby Users mailing list archive at Nabble.com. | http://mail-archives.apache.org/mod_mbox/db-derby-user/201303.mbox/%3C1364318390198-128395.post@n7.nabble.com%3E | CC-MAIN-2014-10 | refinedweb | 116 | 66.23 |
Core Java Interview Questions Page  Binding is the linking of a method call to its implementation. There are two types of binding: static (early) binding, resolved at compile time, and dynamic (late) binding, resolved for the specific object type at runtime.
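The snippet above was cut off; a minimal sketch of the two kinds of binding (the class and method names here are invented for illustration):

```java
// BindingDemo.java — static vs. dynamic binding (illustrative names).
class Animal {
    String sound() { return "generic"; }      // overridden below: call resolved at runtime
    static String kind() { return "animal"; } // static method: call resolved at compile time
}

class Dog extends Animal {
    @Override String sound() { return "bark"; }
    static String kind() { return "dog"; }    // hides Animal.kind(), does not override it
}

public class BindingDemo {
    public static void main(String[] args) {
        Animal a = new Dog();
        System.out.println(a.sound());     // dynamic binding: prints "bark"
        System.out.println(Animal.kind()); // static binding: prints "animal"
    }
}
```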
Hi..
Hi.. what are the steps mandatory to develop a simple java program?
To develop a Java program, the following steps must be followed by a Java developer:
first, the JDK (Java Development Kit) must be installed; then write the source code in a .java file,
compile it with javac, and run the resulting class with the java command.
what is default value for int type of local variable?  Local variables have no default value in Java: they must be assigned before use, or the code will not compile (an int field, by contrast, defaults to 0).
what are the keywords available in simple HelloWorld program?
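The keywords question above has a short answer that a runnable HelloWorld makes concrete — only public, class, static and void are keywords here; String, System, main and args are identifiers:

```java
public class HelloWorld {                     // keywords: public, class
    // Returns the greeting so it can be checked as well as printed.
    static String greeting() { return "Hello, World!"; }

    public static void main(String[] args) {  // keywords: public, static, void
        System.out.println(greeting());       // System, out, println: identifiers
    }
}
```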
A class is a blueprint of similar objects. (True)
How many design patterns are there in core Java?
Which of them are useful with threads, and what are the names of these design patterns?
OK, thank you (in advance).
corejava - Java Beginners
corejava  Code for converting a number into words (for example, if we enter 1 it should come out as "one")?  Hi friend,
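The answer's code is truncated in this copy; a small sketch for single digits (the class and method names are invented):

```java
public class NumberToWords {
    static final String[] WORDS = {
        "zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"
    };

    // Converts a single digit (0-9) to its English word.
    static String digitToWord(int d) {
        if (d < 0 || d > 9) throw new IllegalArgumentException("single digit expected");
        return WORDS[d];
    }

    public static void main(String[] args) {
        System.out.println(digitToWord(1)); // prints "one"
    }
}
```

Multi-digit numbers would need extra rules (tens, teens, hundreds), which the original thread does not show.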
corejava - Java Interview Questions
corejava how to validate the date field in Java Script? Hi friend,
date validation in javascript
var dtCh= "/";
var minYear... for more information.
Thanks
Hi...doubt on Packages - Java Beginners
Package in java..
I have downloaded one program on Password Authentication... Please explain it to me.  Hi friend,
Package javax.mail
The JavaMail API allows you to compose, read and send mail from within your Java programs.
The JavaMail API provides classes that model a mail system.
Core Java Interview Question Page 1
Question:
How could Java classes direct program messages to the system console, but error messages, say, to a file? The System class keeps two independent streams, System.out for normal output and System.err for error messages, and each can be redirected separately (for example with System.setErr()). To reuse such stream classes you have to inherit your class from them, and Java does not allow multiple inheritance of classes.
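The console-vs-file question above can be answered in a few lines with System.setErr(); this sketch redirects the error stream to a temporary file and reads the message back (file name and method names are invented):

```java
import java.io.*;

public class ErrToFile {
    // Redirects System.err to a temp file, writes one message, reads it back.
    static String redirectAndReadBack() {
        try {
            File log = File.createTempFile("errors", ".log");
            log.deleteOnExit();
            PrintStream fileErr = new PrintStream(new FileOutputStream(log));
            PrintStream oldErr = System.err;
            System.setErr(fileErr);               // error messages now go to the file
            System.err.println("error message");
            System.setErr(oldErr);                // restore the console stream
            fileErr.close();
            BufferedReader in = new BufferedReader(new FileReader(log));
            String line = in.readLine();
            in.close();
            return line;
        } catch (IOException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println("normal message");     // still the console
        System.out.println("read back: " + redirectAndReadBack());
    }
}
```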
classes and data abstraction - Java Beginners
classes and data abstraction  Create a Java program for a class named... Its value will be part of other output messages and needs to begin and end with a blank character. Use the following messages:
If currencyType is 'd', output the string
hi
hi  How do we redirect to another page after checking one checkbox and pressing the login button?
Hi..Date Program - Java Beginners
Hi..Date Program  Hi Friend... thanks for your valuable information...
It displays the total number of days,
but I want to display the number of days for each month...
Hi friend,
Code to solve the problem
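The reply's code is missing here; one way to get the days of each month (including leap years) is java.time.YearMonth — a sketch, with the method name invented:

```java
import java.time.Month;
import java.time.YearMonth;

public class DaysPerMonth {
    // Number of days in a given month of a given year (handles leap years).
    static int daysIn(int year, int month) {
        return YearMonth.of(year, month).lengthOfMonth();
    }

    public static void main(String[] args) {
        for (int m = 1; m <= 12; m++) {
            System.out.println(Month.of(m) + ": " + daysIn(2024, m) + " days");
        }
    }
}
```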
Classes in Java
Classes in Java
Exceptions that occur in a program can be caught using try/catch blocks. An exception is an event that occurs during execution and interrupts the normal flow of the program.
Inner classes
Inner classes  Hi, I am Bharat, a student learning a Java course. I have one question about inner classes: how do I access the instance method of
a non-static class which is defined in the outer class?
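A non-static (inner) class needs an enclosing instance, created with the outer.new Inner() syntax; a minimal sketch (class and method names invented):

```java
public class Outer {
    private String label = "outer";

    class Inner {                         // non-static: tied to an Outer instance
        String describe() { return "inner of " + label; }
    }

    static String demo() {
        Outer o = new Outer();
        Outer.Inner in = o.new Inner();   // an Inner must be created via an Outer
        return in.describe();             // instance method of the inner class
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```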
hi all - Java Beginners
hi all  Hi,
I need Java interview questions ASAP; can you please send them to my mail?
Hi,
Hope you don't already have this eBook: .../Good_java_j2ee_interview_questions.html?s=1
Regards,
Prasanth
Hi..
Hi..  What are the access specifiers available in Java? There are four access levels: public, protected, default (package-private, when no modifier is written) and private.
hi
hi  I have connected MySQL with JSP on Linux using JDBC connectivity, but when I run the program it is not working; the program is displaying...
Hi..
Hi.. null is a keyword.True/False?
hi friend,
In Java, true, false and null are not keywords; they are literals. For more detail, read about the null literal.
java classes. - Java Beginners
java classes.  I cannot understand the behavior of anonymous inner classes.
Hi friend,
An anonymous inner class is declared and instantiated in a single expression; it is useful for one-off implementations of an interface or abstract class.
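The linked example is gone from this copy; a minimal sketch of an anonymous inner class (interface and names invented):

```java
public class AnonymousDemo {
    interface Greeter { String greet(String name); }

    static Greeter makeGreeter() {
        // Anonymous inner class: the class is declared and
        // instantiated in one expression, with no name of its own.
        return new Greeter() {
            @Override public String greet(String name) { return "Hello, " + name; }
        };
    }

    public static void main(String[] args) {
        System.out.println(makeGreeter().greet("world")); // prints "Hello, world"
    }
}
```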
hi
hi  My program cannot find java.io.File; why? Help me, please. (Most likely the import java.io.File; statement is missing.)
write a java program that implements the following classes:
Write a Java program that implements the following classes:
A) ...
which are subclasses of the Circle class. All these classes should ...
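The original list of subclasses is truncated above, so the concrete classes here are assumptions; as a sketch, a Circle base class with one possible subclass:

```java
class Circle {
    protected final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Cylinder extends Circle {               // one hypothetical subclass of Circle
    private final double height;
    Cylinder(double radius, double height) { super(radius); this.height = height; }
    double volume() { return area() * height; } // reuses the inherited area()
}

public class ShapesDemo {
    public static void main(String[] args) {
        System.out.println("circle area: " + new Circle(1).area());
        System.out.println("cylinder volume: " + new Cylinder(1, 2).volume());
    }
}
```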
hi
hi  A servlet program to retrieve data from a database using the session object.
hi
hi  What are the access modifiers available in Java? Besides the four access levels (public, protected, default, private), Java has non-access modifiers such as static, final, abstract, synchronized and volatile.
logic for prime number  Logic for a prime number in Java.
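The usual logic is trial division up to the square root of n; a short sketch:

```java
public class PrimeCheck {
    // A number is prime if it is >= 2 and has no divisor in [2, sqrt(n)].
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; (long) i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        for (int n = 2; n <= 20; n++) {
            if (isPrime(n)) System.out.print(n + " "); // 2 3 5 7 11 13 17 19
        }
        System.out.println();
    }
}
```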
CoreJava - Java Beginners
core java an integrated approach  I need a helpful reference for Core Java: An Integrated Approach.
corejava - Java Interview Questions
corejava  How can we make a normal Java class into a singleton class?
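No answer survives in this copy; the standard recipe is a private constructor plus a single shared instance — a minimal sketch of the eager-initialization idiom (class name invented):

```java
public class Singleton {
    private static final Singleton INSTANCE = new Singleton();

    private Singleton() { }                   // private constructor: no outside "new"

    public static Singleton getInstance() { return INSTANCE; }

    public static void main(String[] args) {
        // Both calls return the same shared object.
        System.out.println(getInstance() == getInstance()); // prints "true"
    }
}
```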
Classes in java
etc.
For more details click on the following link
Classes in Java... objects interact directly with their class; almost all the properties of an object are defined by its class.
What is Abstract classes in Java?
What are abstract classes in Java?  What is an abstract class in Java...?
Hi,
In the Java programming language, an abstract class cannot be instantiated directly. That's why the abstract class in Java is used to provide a common base that subclasses complete by implementing its abstract methods.
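A minimal sketch of that idea (class names invented): the abstract base declares area() without a body and shares describe(), and only the concrete subclass can be instantiated:

```java
abstract class Shape {
    abstract double area();                          // no body: subclasses must implement
    String describe() { return "area = " + area(); } // shared concrete behaviour
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

public class AbstractDemo {
    public static void main(String[] args) {
        // Shape s = new Shape();  // would not compile: abstract class
        Shape s = new Square(3);
        System.out.println(s.describe()); // prints "area = 9.0"
    }
}
```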
hi roseindia - Java Beginners
hi roseindia  What is a class?  Hi Deepthi,
A class is a blueprint from which objects are created; objects interact directly with their class, and almost all the properties of an object are defined there. You can find more information about classes, with an example, at:
creating java classes
creating java classes  Create a Java class that can be used to store inventory information about a book. Your class should store the book title... Write a program that tests your class by creating and using at least two objects of the class.
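The exercise's field list is truncated, so beyond the title the fields below are assumptions; a sketch that also shows the "two objects" test:

```java
public class Book {
    private final String title;   // required by the exercise
    private int copies;           // assumed inventory field

    public Book(String title, int copies) {
        this.title = title;
        this.copies = copies;
    }

    public String getTitle() { return title; }
    public int getCopies() { return copies; }
    public void sell(int n) { copies -= n; }

    public static void main(String[] args) {
        Book b1 = new Book("Core Java", 5);
        Book b2 = new Book("Effective Java", 3);
        b1.sell(2);
        System.out.println(b1.getTitle() + ": " + b1.getCopies() + " left");
        System.out.println(b2.getTitle() + ": " + b2.getCopies() + " left");
    }
}
```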
hi Friend... - Java Beginners
Please explain this... Thank you.
Sakthi
Hi friend,
Java IO :
Java Input/Output (I/O) is part of the java.io package, which supplies the stream classes for reading and writing data.
You only have to import the classes you actually use.
creating java classes
creating java classes  This program uses a class named DrivingLicense... Run the program to ensure that it generates the following output.
Alice does NOT have... license
/*
 * Class: DLTest.java
 * Description: Test program
 */
hi sir - Java Beginners
hi sir  Hi sir, I am trying NetBeans to develop Swing applications; please provide details about how to run a program in NetBeans and details about...
Thanks for your cooperation, sir.  Hi Friend
Java Calculator Program
Java Calculator Program Hi, so I need to make a program that "works... Class and need to implement the children classes, which are Number, Product, Sum... two messages: the toString() method which will provide a String representation
Nested and Inner classes
Nested and Inner classes What is the significance of Inner Classes and Static Inner Classes?
or Why are nested classes used?
Hi Friend... nested classes because of the following reasons:
It is a way of logically
HI - Java Beginners
HI  How do I make a program that takes a two-dimensional array, adds each row and each column, builds another array from those sums with the help of a switch case, and also does subtraction and searches the diagonal elements?  Hi
Exception Classes
Exception Classes
The hierarchy of exception classes commences from the Throwable
class, which is the base class for an entire family of exception classes, declared
in the java.lang package.
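A short sketch of that hierarchy in action — ArithmeticException sits under RuntimeException, which sits under Throwable (class and method names invented):

```java
public class ExceptionDemo {
    // Every exception type ultimately extends java.lang.Throwable.
    static String classify(int denominator) {
        try {
            int result = 10 / denominator;
            return "result " + result;
        } catch (ArithmeticException e) {      // subclass of RuntimeException
            return "caught " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(2));  // prints "result 5"
        System.out.println(classify(0));  // prints "caught ArithmeticException"
        System.out.println(new ArithmeticException() instanceof Throwable); // true
    }
}
```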
hi - Java Beginners
hi hi sir,Thanks for ur coporation,
i am save the 1 image... (already saved image based on the customer data previously),plz provide program sir,plzzzzzzzzzzzz Hi Friend,
Please provide some more conducted online by Roseindia include an elite panel of some... that if a beginner starts taking a Java classes online
here, than he/she at the completion of the program becomes a Java professional
and at later stage becomes
hi - Java Beginners
hi hi sir,good afternoon,
i want to add a row in jtable when i am pressing the enter key,and that row is available to insert the data plz give the program sir,urgent
Thank u Hi Friend,
Try
Hi ...CHECK - Java Beginners
Hi ...CHECK Hi Da..sakthi Here
RUN THIS CODE
-----
package... ");
Screen.showMessage(" DON WORRY WE'ILL ROCK ");
}
}
}
Hi frend,
Plz explain about the two classes which not exist in this code
Direct Web Remoting
Direct Web Remoting
Direct Web Remoting is a framework for calling Java methods directly from
Javascript code, Like SAJAX, can pass calls from Javascript into Java methods
and back,thanks for providing the datepicker program
but i want... class when i am want,in that type of flexibility plz provide the program sir,in my... in more no.of programs,how to call the total program of datepicker in my tabs
hi - Java Beginners
hi hi sir,Thanks for ur coporation,
i am save the 1 image... saved image based on the customer data previously),plz provide program sir... that image is rendered to my frame,that's my question Hi Friend
hi - Java Beginners
hi how we make a program of final variable,final method,final class.plz give the full coding public final class FinalExample {// final class
public static final int finalVariable = 10; // Final variable
hi - Java Beginners
hi hi sir,when i am add a jtable record to the database by using the submit button,then nullpointer exception is arised,plz see this program and resolve this sir,plzzzzzz
submit=new JButton("SUBMIT.
Abstract Classes - Java Interview Questions
Abstract Classes Why we cann't instantiate a Abstract Classes? Even.... Hi friend,
public class AbstractExam {
public static void...://
Thanks
(Roseindia Team
Java classes
Java classes Which class is extended by all other classes
Java Program - Java Beginners
Java Program Hi I have this program I cant figure out.
Write a program called DayGui.java that creates a GUI having the following properties...
Caption Messages
Layout FlowLayout
JButton Name
Distinguishes JavaBeans from Java Classes
Distinguishes JavaBeans from Java Classes What is the difference between a Java Bean and an instance of a normal Java class?Explain with an example, pls?
Hi Friend,
Differences:
1)The java beans are serializable
program
program explanation of program on extending thread class
Hi Friend,
Please go through the following link:
Java Threads
Thanks
What is the base class of all classes?
What is the base class of all classes? Hi,
What is the base class of all classes?
thanks
Hi,
The base class is nothing but the Abstract class of Java programming. We uses the syntax for base class of all Program - Java Beginners
Java Program Hi I have this program I cant figure out.
Write a program called DayGui.java that creates a GUI having the following properties.... Messages, and FlowLayout
Object- JButton
Property- Name, Caption, Mnemonic
Interfaces and Abstract Classes - Development process
Interface and Abstract Classes? Hi Friend,
Interface:
Java does... by using the interface.
Interfaces are useful when you do not want classes to inherit from unrelated classes just to get the required functionality
hi plzz reply
hi plzz reply in our program of Java where we r using the concept of abstraction Plz reply i m trying to learn java ...
means in language of coding we r not using abstraction this will be used only for making ideas
how to create classes for lift program
how to create classes for lift program i would like to know creating classes for lift program
java program - Java Beginners
java program Take in 7 numbers as command line arguments and store... they were entered. Proper error messages should be displayed if:
i) command... in
iii) If one of the arguments is not a valid number
Hi friend,
Code
java program - Java Beginners
java program i have two classes like schema and table. schema class... name, catelogue,columns, primarykeys, foreignkeys. so i need to write 2 java classes one for schema and another for table, with appropriate methods which
Java classes
Java classes What is the Properties class | http://www.roseindia.net/tutorialhelp/comment/94789 | CC-MAIN-2013-48 | refinedweb | 1,918 | 54.52 |
Python for their domain name, URI, or parameters.
- urllib.error is used to handle exceptions.
- urllib.robotparser is used to parse
robot.txtfiles.
urllib.request
The
urllib.request module used to open specified URLs without the UI or browser. The URL is provided to the
urlopen() metod like below.
import urllib.request request_url = urllib.request.urlopen(' print(request_url.read())
urllib.parse
The
urllib.parse module is used to parse and manupilate the URL for its differet parts. A tipical URL consist of scheme, netlocation, path, parameters, query and fragment.
import urllib.parse url = " parsed_url = urllib.parse.urlparse(url) print(parsed_url)
The urllib.parse also provides other methods like below which can be used to parse or split the URLs.
urllib.error
Sometimes URL related methods may provides errors and exceptions. The
urllib.error is used to handle and manage these errors and eceptions. There are two main and most popular error and exception types named
URLError and
HTTPError .
URLError is raised when error occurs during fething of the URL for connectivity etc.
HTTPError is raised for HTTP related errors which are rare and different from commo errors. The HTTPError is subclass or subtype of the URLError.
urllib.robotparser
Web sites provides the
robot.txt or robot files in order to provide information and instrcutions for the web scarappers. The robot file is created manually or automatically and provides paths or URLs for the web sites.
import urllib.robotparser robot = urllib.robotparser.RobotFileParser() x = robot.set_url(' a=robot.read() print(a) | https://pythontect.com/python-urllib-module-tutorial/ | CC-MAIN-2022-21 | refinedweb | 250 | 53.68 |
Opened 9 years ago
Closed 9 years ago
Last modified 3 years ago
#6457 closed Uncategorized (worksforme)
Tutorial sample code: change max_length to maxlength
Description
While working through the tutorial, I reached the section where you modify the polls/models.py file.
I got a max_length error (running development code (rev 7028)):
>python manage.py sql polls mysite.polls: __init__() got an unexpected keyword argument 'max_length' 1 error found. 1620, in execute_from_command_line mod_list = [models.get_app(app_label) for app_label in args[1:]] File "c:\Python25\lib\site-packages\django\db\models\loading.py", line 40, in get_app mod = load_app(app_name) File "c:\Python25\lib\site-packages\django\db\models\loading.py", line 51, in load_app mod = __import__(app_name, {}, {}, ['models']) File "C:\Documents and Settings\usner\Desktop\django\mysite\..\mysite\polls\mo dels.py", line 3, in <module> class Poll(models.Model): File "C:\Documents and Settings\usner\Desktop\django\mysite\..\mysite\polls\mo dels.py", line 4, in Poll question = models.CharField(max_length=200) TypeError: __init__() got an unexpected keyword argument 'max_length'
After removing 'max_length=' from the code to see what happened, I received the following error:
>python manage.py sql polls polls.poll: "question": CharFields require a "maxlength" attribute. polls.choice: "choice": CharFields require a "maxlength" attribute. 2 errors found.
The tutorial still has the underscore in max_length in the sample code.
After removing it, it worked.
Solution:
Make the following change to the sample code (max_length -> maxlength):
from django.db import models class Poll(models.Model): question = models.CharField(maxlength=200) pub_date = models.DateTimeField('date published') class Choice(models.Model): poll = models.ForeignKey(Poll) choice = models.CharField(maxlength=200) votes = models.IntegerField()
Thanks,
Mike
Change History (3)
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
You are not actually using revision 7028; if you were you would not be receiving this error. Remove any older versions of Django you might have had installed and try again; if you continue to experience problems post a question to the django-suers mailing lists, which provides support for these sorts of issues.
comment:3 Changed 3 years ago by
none of them seems to work for me! django 1.6 and python 2.7.5
Are you sure the Django version you are using (i.e. the one at
c:\Python25\lib\site-packages\django\) is actually SVN r7028 and not an older one? the conversion from
maxlengthto
max_lengthwas commited on [5803] | https://code.djangoproject.com/ticket/6457 | CC-MAIN-2017-04 | refinedweb | 406 | 53.07 |
Hi,
Thanks for the link but I've already got the resources I need in order to revise : ). I was mainly looking for advice about Open Book Programming/Java exams. However, thanks for finding me the...
Hi,
Thanks for the link but I've already got the resources I need in order to revise : ). I was mainly looking for advice about Open Book Programming/Java exams. However, thanks for finding me the...
Sorry if this post is off topic, i'm not too sure but thought i'd post it anyway! if needs be you can delete this. But, I am just looking for some advice from programmers : ) this seemed like the...
Assessed work
I want to learn from what i'm doing, so please no answers/spoilers.
I'm required to create a program that finds the minimum value in an Array of integers.
Here is my approach :...
Exception in thread "main" java.lang.NullPointerException
at PizzaChoice.main(PizzaChoice.java:150)
and if (bacon.getType().equals("bacon")||(prawn.getType().equals("prawn"))) {
sorry about...
Do you know why
boolean vegetarian = true;
if (bacon.getType().equals("bacon")||(prawn.getType().equals("prawn"))) {
vegetarian = false;
System.out.println("This...
Well i showed the code in a previous post, i've went over and over and they're definitely the same
Yeah but the compiler is telling me that "The method geteggPrawn() is undefined for the type Pizza" yet it works for the other without an error, even though i don't have get method in pizza, it's in...
dddddddddddddddddddddddddddddd
I've already got this down at the moment, but different object name. Can you show an example of using pm with method, please?
I know how to create a class =/ i mean i don't know how to test each class, just call the class?
How can i do my testing class =/ cheers
Is it ok that when you've looked at it, and if the problem is resolved. Can i edit my post and remove the code? don't want others stealing you see..
It won't allow me to do that, receive an error message "Implicit super constructor PizzaBase() is undefined for default constructor. Must define an explicit constructor"
The method is defined in the PizzaBase class, so how can i define it in both, i'm required to use extend on pizza because it contains all the toppings that i'll need
I kinda just copied the way i done with my other getters and setters. So in my Pizza class the only thing i have related to this is
static public double MargheritaCost;
Are you sure i don't it correctly ? sorry i'm just becoming really aggravated, i haven't progressed in 3 days and it needs to be done for friday. Becoming worried ;p
I've deleted the others, but it's definitely the correct one
I may sound like an idiot, but how do you check =/
The method setMargheritaCost(double) is undefined for the type Pizza
I'm using extends
public class PizzaMenu extends Pizza {
void Margherita ()
{
Pizza p=new Pizza();
I'm trying to try and set the cost of the pizza, in one class i have :
public double MargheritaCost;
public void setMargheritaCost(double MargheritaCost) {
this.MargheritaCost =...
Basically, it's a pizza, Pre-defined Pizza e.g. margarita so i've got topping objects, base ect but i need to somehow combine them to make a 'predefined' pizza
Someone suggested a hashMap, could this work?
Thanks a lot. Do you mind if i send you a pm with my current code, because the structure is extremely distinctive; and it's quite difficult trying to follow this with my structure. Since it's a...
/*
*/
import java.util.ArrayList;
import java.util.List;
import java.text.DecimalFormat;
public class Pizza
{
//Cost and name | http://www.javaprogrammingforums.com/search.php?s=1a49b4f5f349a1cab780691b7c76523e&searchid=836878 | CC-MAIN-2014-15 | refinedweb | 637 | 64.71 |
For code/output blocks: Use ``` (aka backtick or grave accent) in a single line before and after the block. See:
Bug? `AttributeError: 'Cerebro' object has no attribute '_exactbars'`
- A Former User last edited by
import backtrader as bt cb = bt.Cerebro() cb.run() cb.plot()
Results in:
Traceback (most recent call last): File "src/main.py", line 5, in <module> cb.plot() File "/home/sohail/src/trading/build/venv/lib/python3.6/site-packages/backtrader/cerebro.py", line 970, in plot if self._exactbars > 0: AttributeError: 'Cerebro' object has no attribute '_exactbars'
- Paska Houso last edited by
@Cheez said in Bug? `AttributeError: 'Cerebro' object has no attribute '_exactbars'`:
import backtrader as bt
cb = bt.Cerebro() cb.run() cb.plot()
This was mentioned a long time ago which a quick google search has uncovered:
You need to a bit more ... like adding a data feed. | https://community.backtrader.com/topic/767/bug-attributeerror-cerebro-object-has-no-attribute-_exactbars/2 | CC-MAIN-2021-43 | refinedweb | 144 | 59.3 |
Get
the latest Cisco news in this December issue of the Cisco
Small Business Monthly Newsletter
Cisco UC520 trying to bind a specific IP to a host, here's the config snip:
ip dhcp relay information trust-all
ip dhcp excluded-address 10.1.1.241 10.1.1.255
ip dhcp excluded-address 192.168.10.241 192.168.10.255
ip dhcp excluded-address 10.1.1.1 10.1.1.15
ip dhcp excluded-address 192.168.10.1 192.168.10.15
!
ip dhcp pool phone
network 10.1.1.0 255.255.255.0
default-router 10.1.1.1
option 150 ip 10.1.1.1
ip dhcp pool data
import all
network 192.168.10.0 255.255.255.0
default-router 192.168.10.1
ip dhcp pool CHADWICK
host 192.168.10.2 255.255.255.0
hardware-address 0100.1601.ed0b.e9
client-name CHADWICK
ip dhcp pool BROTHER
host 192.168.10.3 255.255.255.0
hardware-address 0190.4ce5.4b47.d6
client-name BRW904CE54B47D6
ip dhcp pool NAS
host 192.168.10.11 255.255.255.0
hardware-address 0008.9bc8.8b9d
client-name NAS
Here's the error I keep seeing when debuging dhcp events:
000103: Nov 8 04:26:02.039: DHCPD: Seeing if there is an internally specified pool class:
000104: Nov 8 04:26:02.039: DHCPD: htype 1 chaddr 0008.9bc8.8b9d
000105: Nov 8 04:26:02.039: DHCPD: remote id 020a0000c0a80a0101050001
000106: Nov 8 04:26:02.039: DHCPD: circuit id 00000000
I've played around with for an hour and I can't figure out why it will not give that host the IP address I've set. I'm at a lost.
david
In case anyone has this issue, the problem is that you can't use hardware-address for Windows or other non bootp devices. So use client-identifier . | https://community.cisco.com/t5/voice-systems/can-t-bind-ip-to-host/m-p/3272581 | CC-MAIN-2020-05 | refinedweb | 322 | 79.26 |
Quiver Plots in Python
How to make a quiver plot in Python. A quiver plot displays velocity vectors a arrows..
import plotly.figure_factory as ff import numpy as np x,y = np.meshgrid(np.arange(0, 2, .2), np.arange(0, 2, .2)) u = np.cos(x)*y v = np.sin(x)*y fig = ff.create_quiver(x, y, u, v) fig.show()
import plotly.figure_factory as ff import plotly.graph_objects as go import numpy as np x,y = np.meshgrid(np.arange(-2, 2, .2), np.arange(-2, 2, .25)) z = x*np.exp(-x**2 - y**2) v, u = np.gradient(z, .2, .2) # Create quiver figure fig = ff.create_quiver(x, y, u, v, scale=.25, arrow_scale=.4, name='quiver', line_width=1) # Add points to figure fig.add_trace(go.Scatter(x=[-.7, .75], y=[0,0], mode='markers', marker_size=12, name='points')) fig.show()
See also¶
Cone plot for the 3D equivalent of quiver plots.
Reference¶
For more info on
ff.create_quiver(), see the full function reference<< | https://plotly.com/python/quiver-plots/ | CC-MAIN-2020-34 | refinedweb | 168 | 73.64 |
Hi my new company wont let me install any packages for dynamo because its blocked on my computer from installing anything. My question is how do I set worksets to all elements in a view, if I cant use any external packages? I got pretty close but it puked on me because worksets cant take strings apparently.
First, tell your company to install all the packages to a single networked location, and every time you want one they have 24 hours to review and install. Anything longer than that is insane and is an IT department overstepping its bounds in order to feel important and justify their existence. If the company lets them do this regularly than I’d start to look for another new job.
Second, for the most part packages aren’t software, they are files which can be copied into place if you have write access to the folder. They may have some code in them, but if you have python installed and dynamo installed you don’t need anything else other than the files. Download them to the correct location and you should be good to go. If you can’t write to the default folder, then try adding a “dynamo packages” directory to your documents folder and save them there. If you can’t write there then just walk out cuz that’s insane.
If you really can’t make any packages work, try to feed the workset element, not the name. Might not work as I haven’t tested this before.
Lastly, if your IT department wants some more input on why this isn’t an issue, have the director (or whoever is holding this back) start another “security concerns” post with their account so we can correct any misinformation, or so things can be run up the ladder, and the dev team can address any legitimate security concerns directly. They have already done quite a bit towards that end (“The package you are downloading contains python scripts” made me chuckle the first time I saw it.).
Jacob,
I appreciate that you have the same views as me about my IT department, but there is too much red tape there to solve my problem. But I have a couple quick questions for your. If I have dynamo installed dont I already have python? or is it a separate package all together? If it is, I dont think I can get it. With that being said. How do I actually download the packages outside of dynamo to actually paste them into the folder like you suggest? any exe or msi file or dynamo package gets rejected on my computer. Lastly I read elsewhere that worksets are an element, that I can find, but I just dont know where to find them. Do you? I checked in Categories, and Built In parameters, but I cant find them. How do I flush them out? Thanks so much Jacob!
Most direct downloads shouldn’t be an issue. Lady bug and a few others may have issues.
You should already have Python, try adding a Python node to your workspace. If you don’t than bigger issues are present.
Not sure how to find wirksets without custom nodes. Worse case scenario you can rebuild them as nodes in your workspace. Time consuming and slow though.
@mix
Go to dynamopackages.com and download from there (its all zip-files).
Create a folder and unzip it there.
In Dynamo go to settings and add the path to the newly created folder.
Exit Dynamo and start it again.
A carpenter without a hammer cant do his job.
Marcel
Good analogy @Marcel_Rijsmus, but is more like a carpenter without a power tool to those who use it. Those skill saws allow for cleaner cuts in way less time you know?
If it had added cost I would at least be willing to hear the argument, but it’s free… oh well more work and higher profit margins for those of us who utilize modern tools.
Thank Marcel, but I tried to restart dynamo and it didnt take. I added the path in dynamo settings and restarted but it didnt seem to take effect. I have a couple questions. Does each package that I download need to be added to its own folder named accordingly or how exactly can I consolidate these packages into one directory? thanks for your help!
Have them all have their own folder in the newly created folder and address only the top folder.
Dynamo will read the subfolders.
Marcel
@JacobSmall
Indeed, my boss loves it when i do the work of a week in 10 minutes.
We need guys like you, were inundated with new projects.
Ok thanks, Ill try that.
@Marcel_Rijsmus Yeah, my last job was like that. I could do 5 peoples jobs. But when you chop my favorite work horse, off at the knees I’m kind of powerless. lol. Is that a job offer? lol
We are hiring.
hello,
I tried to do what you said, but I am not getting my packages to show up. Only spring nodes shows up for some reason, Can you tell me what I am doing wrong please? I created subfolders titled the same as the package and put it into the directory I created, and extracted all the packages to its corresponding folder, but its not taking. Please advise. thanks
Hi @mix
It work best if you only have one folder for all packages, and better still, if it is on a network drive, so all your collegues can acces it too,and run the scripts you create for them.
The only reason i can think of why its reading only the SpringNodes package is that there may be .DLL files in the other folders which are blocked by your IT department.
Hello, I do have only one folder for all packages dont I? All my packages where put into the “Dynamo Packages” folder above, unless you mean that I should only have one directory to the packages? Do the individual package folders need to be spelled a specific way in order for dynamo to read them properly? Edit: I think you might be right on the .dll thing. When I hit he “add” button, and go to import library, it want a .dll file. and when I try to load one from the folder I get this error message
You can get workset names and ids from Python. The id is the actual parameter value you need to set an element’s workset.
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import *
# Import DocumentManager
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
# Import RevitAPI
clr.AddReference("RevitAPI")
import Autodesk
# Import ToDSType(bool) extension method
clr.AddReference("RevitNodes")
import Revit
from Autodesk.Revit.DB import *
clr.ImportExtensions(Revit.Elements)
#The inputs to this node will be stored as a list in the IN variable.
#doc = DocumentManager.Instance.CurrentDBDocument
uiapp = DocumentManager.Instance.CurrentUIApplication
app = uiapp.Application
doc = DocumentManager.Instance.CurrentDBDocument
#create workset collector
userWorksets = FilteredWorksetCollector(doc).OfKind(WorksetKind.UserWorkset)
names, ids = [], []
for i in userWorksets:
names.append(i.Name)
ids.append(i.Id.IntegerValue)
#Assign your output to the OUT variable
OUT = names, ids
Thanks Nick! that script works perfectly,
Now how do I plug the id element into the element set parameter by name? do i have to convert the IDs to something somehow?
The python node exports both the workset name (index 0) and the workset ID (index 1), so you need to first get the item at index 1, then the item at index 132.
Looks like this: | https://forum.dynamobim.com/t/how-to-set-worksets-without-using-packages/13866 | CC-MAIN-2017-43 | refinedweb | 1,275 | 66.13 |
#include <openssl/ssl.h> void SSL_set_bio(SSL *ssl, BIO *rbio, BIO *wbio); void SSL_set0_rbio(SSL *s, BIO *rbio); void SSL_set0_wbio(SSL *s, BIO *wbio);
SSL_set0_wbio() works in the same as SSL_set0_rbio() except that it connects the BIO wbio for the write operations of the ssl object. Note that if the rbio and wbio are the same then SSL_set0_rbio() and SSL_set0_wbio() each take ownership of one reference. Therefore, it may be necessary to increment the number of references available using BIO_up_ref(3) before calling the set0 functions.
SSL_set_bio() is similar to SSL_set0_rbio() and SSL_set0_wbio() except that it connects both the rbio and the wbio at the same time, and transfers the ownership of rbio and wbio to ssl according to the following set of rules:
Because of this complexity, this function should be avoided; use SSL_set0_rbio() and SSL_set0_wbio() instead.
Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <>. | https://man.omnios.org/man3/SSL_set_bio | CC-MAIN-2022-33 | refinedweb | 172 | 55.88 |
i have already finished this exercise but i was wondering if i can change the whole function by a function
Changing the functionality of a function
I'm not sure what do you mean.
Why would you make a whole new function to change another one, when you can just rewrite the code ?
Hi, @letmeprogramyou1 ,
While the following is not a very useful example, it is offered to demonstrate that a function can redefine another function. Be careful, though, for changing the definition of a function in the midst of program execution can easily lead to buggy code.
def func(): # original func return "I am the original func function." def redefine_func(): # Make the name func global; redefine the func function global func def func(): return "I am the redefined func function." return "I redefined the func function." # Call func print(func()) # Call the function that redefines func print(redefine_func()) # Call func again; see that it has changed print(func())
Output ...
I am the original func function. I redefined the func function. I am the redefined func function. | https://discuss.codecademy.com/t/changing-the-functionality-of-a-function/49990 | CC-MAIN-2018-51 | refinedweb | 177 | 64 |
CGTalk
>
Development and Hardware
>
Graphics Programming
> Bat file question...
PDA
View Full Version :
Bat file question...
kyphur
03-18-2004, 08:23 PM
We just rendered out a crapload of frames but now I have a problem. The cell padding is at the front and back and I'm trying to write a bat file just to remove the first 4 digits. I've gotten it to remove the 4 digit extension with the dot (.jpg) and everything up to the .jpg. I just can't get it to take out the first four characters. Anyone that can give me a real quick solution to this would be rewarded with wine, women and dance!
Thanks,
Kyph
etni3s
03-18-2004, 08:25 PM
It would be rally helpful to see how you have tried to solve the problem, so posting a code-snippet would be great ;)
kyphur
03-18-2004, 09:00 PM
We tried multiple variations on this:
rename 0*_*.* *.*
This was to take the first 3 digits and the underscore off and then just leave the left overs. Well, it doesn't work that way. We found that it HAS to have something to replace with whatever it's taking away for some weird reason with this convention. I think with us removing the file extension and the everything but the file extension it was just us slamming it with something it couldn't understand and it was trying to make due.
Kyph
playmesumch00ns
03-19-2004, 09:04 AM
Windows batch files are the poorest excuse for a scripting system. Ever.
You may want to do it using the tools you've got already, but I'd highly advise getting python. You can do this and a million more things much more easily, and with a much prettier syntax:
import os
for filename in os.listdir( '.' ):
rename( filename, filename[4:] )
running that in the current directory will rename everything to its current filename, minus the first four characters.
For an even more pleasant scripting experience, get cygwin as well.
kyphur
03-19-2004, 01:49 PM
Yeah, I would like to use something more versitile but it's really difficult for us to install stuff here at work. We have NO ADMINISTRATIVE privelidges whatsoever and from my understanding we need to install Python in order to utilize it right?
I've never messed with Python so I don't know if we need the modules to run the script you just posted. I just went to this place () to read a quick description and I did download the module from Python.org.
Anyway,
Kyph
kyphur
03-19-2004, 02:44 PM
O.K. We were able to install it anyways on two of our comps. Do we need to compile the script and execute within the directory we want to have the files renamed?
kyph
playmesumch00ns
03-19-2004, 02:58 PM
you just need to make sure you have the bin directory of the python installation in your path, then do
python <scriptname>
in the directory where the jpgs are.
mattbolton
03-19-2004, 03:00 PM
There are several freeware utilities out there that perform this type of task. I don't remember the name of the one I have used in the past, but here is one I found this morning.. | http://forums.cgsociety.org/archive/index.php/t-131192.html | CC-MAIN-2014-10 | refinedweb | 558 | 78.18 |
I am attempting to solve a challenge, but I have hit a roadblock. I am a beginner programmer attempting to add tens of thousands of numbers. If I wait long enough, my program can easily yield the correct sum, however, I am looking for a more efficient method.
What is an efficient method for adding thousands of numbers quickly?
Side note: I have been reading about modular arithmetic, but I cannot quite wrap my head around it. Not sure if that could be useful for this situation.
I am attempting to get the sum of every prime number below 2 000 000. Here is my code so far:
public class Problem10 {
public static void main (String[] args) {
long sum = 0L;
for(long i = 1L; i < 2000000; i++) {
if(isPrimeNumber((int)i)) {
sum += i;
}
}
System.out.println(sum);
}
public static boolean isPrimeNumber(int i) {
int factors = 0;
int j = 1;
while (j <= i) {
if (i % j == 0) {
factors++;
}
j++;
}
return (factors == 2);
}
}
You can replace your
isPrimeNumber() method with this to speed it up substantially.
public static boolean isPrimeNumber(int i) { if (i==2) return true; if (i==3) return true; if (i%2==0) return false; if (i%3==0) return false; int j = 5; int k = 2; while (j * j <= i) { if (i % j == 0) return false; j += k ; k = 6 - k; } return true; } | https://codedump.io/share/hVOJkR462aJo/1/what-is-an-efficient-method-for-adding-thousands-of-numbers-quickly | CC-MAIN-2017-47 | refinedweb | 225 | 67.08 |
Parag Mhashilkar, 05/29/2019 10:19 AM
Weekly Meeting Notes
May 29, 2019
Marco Mambelli, Parag Mhashilkar, Marco Mascheroni, Dennis Box
- Release status
- RC was cut last week. Mambelli is testing and running into a few auth problems; he thinks it's his own issue
- Dennis ran into problems too but he thinks it is his scripts
- Worked for Mascheroni with HTCondor 8.6. Did tests for the script changing the owner across the board. The first test was not successful when he ran it with HTCondor still running, since the owner was still the old one. After stopping condor, he ran the script and did a condor release, and everything worked fine. This should be well documented.
- Mambelli
- At HTCondor Week had discussions with Greg and TJ and gave suggestions on how to go ahead with Singularity. condor ssh to job should be working but it is not: did a test with root-installed condor, but with glideins condor ssh to job doesn't work when condor is not started
- HTCondor supports custom user-defined map/dict structures
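The ownership-migration sequence in the release-status notes above (stop HTCondor first, run the owner-changing script, then release the held jobs) can be sketched as a small wrapper. This is only an illustration: the migration-script name (`change_owner.sh`) and the target account (`gfactory`) are made-up placeholders, not the project's actual names, and the wrapper defaults to a dry run that just prints the commands.

```shell
#!/bin/sh
# Sketch of the ownership-migration steps discussed above.
# change_owner.sh and gfactory are hypothetical placeholders.
set -e

DRY_RUN=${DRY_RUN:-1}   # default: only print what would be run
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# 1. Stop HTCondor first: running the script while condor is up
#    leaves the old owner on files the daemons still hold open.
run condor_off -daemon master

# 2. Change ownership of the factory files to the single factory user.
run ./change_owner.sh gfactory

# 3. Restart condor and release any jobs that went on hold.
run condor_master
run condor_release -all
```

Running it with `DRY_RUN=0` would execute the commands for real instead of printing them.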
May 22, 2019
Marco Mambelli, Parag Mhashilkar, Lorena Lobato, Marco Mascheroni, Dave Dykstra, Dennis Box
- Release status
- pylint failures since the version was changed. Now we catch more SL7 errors. Need to confirm it's not related to the pylint version
- Lorena and Marco tested the single-user factory. Marco found one issue with permissions, which was fixed with changes to the spec file. Also testing if we can drop OS users on the frontend, since we are moving to a single factory user. Checking with HTCondor users on how to do it without different OS ids
- Mambelli will cut a release candidate later today
- Dave Dykstra
- Having several discussions related to singularity with Mambelli.
- Mambelli: Everything works fine with system installed condor and tarball installed condor. Will need condor 8.8.2 for pilot
May 15, 2019¶
Marco Mambelli, Dennis Box, Parag Mhashilkar, Lorena Lobato, Dave Dykstra
- Singularity
- Security release announced yesterday. Building it. Released 3.2 that has major changes. Building a patched version 3.1.1.1-1 Few things should have been in epel testing which were not there. But now with this release it has those changes. Impacts unprivileged mode.
- Will go in osg 3.2
- Next will put singularity 3.2 in production osg
- v3.5 Release Status
- working on singularity tickets and
- Need to wrap up.
- Parag: Need to get release out right away to give users chance to try them out.
- Mambelli: HTCondor with no switchboard support will not go into OSG production until June or so because of the delay
- Transition of file and job ownerships in factory for switchboard changes. HTCondor team helped with the migration scripts and steps that are needed
- Developers
- Mascheroni
- Working on testing script
- Lorena
- Get everything done for blackhole detection
- Working with Diego and fixed periodic script
- Dennis
- Mostly on vacation
- While testing found small bugs
- Mambelli
- Working on feedback tickets and assigned them. Submission of singularity jobs in HTCondor
- Working on coordinating planning for summer students
- Thomas, 2 target students, 1 quark net student and Italian student in Aug-Sep
May 01, 2019¶
Marco Mambelli, Dennis Box, Parag Mhashilkar, Lorena Lobato, Dave Dykstra
- Singularity
- 3.2 in rc is ready should be released any time
- singularity dev is working on fuse command option that Dave proposed which should work cvmfs provided it is linked with fuse3lib
- working on fuse3 in epel. submitted pull request. got permission from fuse3 dev and gave permissions after several days.
- singularity wrapper. cms is in process of discussions and switching to glideinwms provided wrapper instead of using their own.
- Action items
- Marco sent email to Edgar about students but the email thread died after that
- Roadmap in Wiki
- Working on moving artifacts to gitlab free account.
- Stakeholder slides
- Going through the slides
April 24, 2019¶
Marco Mambelli, Dennis Box, Parag Mhashilkar, Lorena Lobato, Marco Mascheroni
- Release Status
- 3.4.5 is out and Diego tested and will be in next OSG release 3.4.28. currently in OSG testing. We still support SL6.
- Working on 3.5. Current list is long. Need to trim once single user is tested.
- Marco Mascheroni
- Nothing to report
- Lorena Lobato
- Talking with Krista on periodic scripts
- Testing 3.4.5
- Dennis
- Not many cycles last week, closed
- Marco Mambelli
- Mainly on condor and singularity
- Next week we need developers slides for stakeholders meeting
April 03, 2019¶
Marco Mambelli, Dennis Box, Dave Dykstra, Marco Mascheroni
- Singularity report from Dave
- Singularity 3.1.1 fully released in OSG-upcoming and epel testing, epel in 2 weeks
- Singularity core team will add fuse3
- Fuse3 will be supported in epel soon probably
- From Dirk: @ TACC their worker nodes are running RH7 and allow fuse mounts. This means that CVMFS could be mounted as an unprivileged user. Mounted in a directory where you have write access and then bind mount in the right place when starting Singularity. Unprivileged namespaces would make it easier: we could start an unprivileged namespace, start CVMFS inside it and then run Singularity. Dave and Dirk will check if they can change the kernel option to enable that
- Release:
- GWMS 3.4.5 RC1 has been released yesterday. Marco Mambelli's moke tests are OK (SL6 and SL7 upgrades). Dennis will start his automated tests.
- Developers
- Marco Mascheroni
- Fix for 3.4.5, boolean comparison more robust
- Optimization of Frontend code for production; will push it to a branch. Added also the code that that dumps the data. Can be enabled by uncommenting some lines in the code (will add detailed explanation in the comments). Profiler code will be added to the unittest directory. Will add a comment with instruction to factor out the inner function to have detailed profiling but will not integrate that in the code (makes it less legible).
- Since Krista added a new Frontend with a new DN a script form Diego is not getting correctly the status: system control-status is reporting the frontend as inactive. This may be because of the behavior in SL7 (systemctl instead of system). Marco Mascheroni will investigate
- Dennis Box
- Finished 21940, unit testing
- Will close the testing of incommon certificates ticket
- Marco Mambelli
- Released last week 3.4.4 and troubleshoot the problem reported by OSG integration
- Released and tested 3.4.5 RC1
April 10, 2019¶
Marco Mambelli, Lorena Lobato, Dennis Box, Parag Mhashilkar, Dave Dykstra, Marco Mascheroni
- Release Status 3.4.4
- In OSG testing
- Edgar tested. Matches are not working and he confirmed that it's a 3.4.4 issue; Marco Mascheroni is looking into it. Last time it was caused by bool and string matching. Mascheroni tried Edgar's setting in testing and it was working fine. Needs more investigation.
- Developers
- Mascheroni
- Disk filling up because of pilot stdout and stderr logging. Does glideinwms clean up the logs? If the frontend is not asking for an entry, will the logs get cleaned up for that entry?
- Mambelli: Not sure about glidein logs.
- Glidein off issue faced by FIFE. We don't have access to the credentials so can't troubleshoot
- Dennis
- Lorena
- Providing feedback on 3.4.4 and troubleshooting blacklist
- Mambelli
- Testing on pending issues on 3.4.4
- Moving singularity wrapper
- Python 3 migration: On hold until 3.5 finalized.
- Need to fast track the glideinwms 3.5 if it needs to go in the upcoming.
April 03, 2019¶
Marco Mambelli, Lorena Lobato, Dennis Box, Parag Mhashilkar, Dave Dykstra, Marco Mascheroni
- Singularity
- 3.1.1 released in OSG upcoming-development, fedora and planning for EPEL as well
- Fixes last known problem about incompatibility with 2.6
- In the last few days figured out how to mount a fuse file system as privileged in an HPC system and run the fuse system inside the container, so CVMFS can run inside. A way to run CVMFS on HPC. It should avoid the need for huge containers with CVMFS inside. It depends on libfuse-3. CVMFS developer Jacob managed to get it working in development mode.
- Submitted a request to update install singularity documentation on how to install it and set it unprivileged. Running of CVMFS is already in 3.4.4. Once experiments adopt it, we can start telling experiments to remove singularity installation.
- CMS is thinking about moving to unprivileged singularity. Brian pushing for pilot sites first before asking other sites.
- Release Status v3.4.4
- rc4 test all positive. If everything is ok Marco will release later today or tomorrow morning
- Mambelli
- Started working on condor invoking singularity for 3.5. Created branch for 3.5. Master -> 3.4
- Mascheroni
- Heard a talk about glideinwms and submission infrastructure from CMS side
- Currently working on tests of improvements for count match function. Apply hotfix for cms frontend and test it.
- Fix for downtime entries. Adding option in frontend to ignore entires in downtime and consider them for un-matched.
- With Edgar found some problems related to schedd 8.8.1 and frontend communication. Frontend is running 8.6. Couple of options in ticket with glideinwms.
- Dennis
- Will work on smoke tests later today
- Lorena
- Working on Couple of tickets, providing feedback and testing. Fixing review errors on branch used for code review. troubleshooting fife script.
- Will talk to condor team relate to any problem related to black list script.
- Working on configuration on black hole detection
March 27, 2019¶
Marco Mambelli, Lorena Lobato, Dennis Box, Parag Mhashilkar
- v3.4.4 release update
- 3.5
- FIFE & GCO glidein_off not working with the way infrastructure is deployed and supported here. Its not glideinwms problem but we should help them arrive at an agreement and then close the ticket.
March 6, 2019¶
Marco Mascheroni, Marco Mambelli, Parag Mhashilkar, Maria Zvada
Dave Dykstra, Dennis Box, Marco Mascheroni, Marco Mambelli, Lorena Lobato
- Singularity (Dave Dykstra)
- Singularity 3.1.0 released and built for Fedora. Incompatibility w/ 2.6 (if there is a duplicate bind path behaves differently: 2.6 accepted it, 3.1 gives a fatal error)
- The show stopper is an issue w/ unprivileged Singularity pulling from Docker (not working in 3.1), will be fixed soon, high priority
- Future feature: Potential to be able to do nested Singularity (outside would need setuid root, inside could be unprivileged), will require the most recent kernel from EL7. Would allow using Singularity in the node and run it from the glidein
- Next week will be at the Singularity users meeting
- Release 3.4.4
- Tickets halting:
- Factory monitoring for HEPCloud
- Unit test for boolean values (Lorena will work on it since Dennis is taking some days off)
- There may be a ticket about parsing metasite configuration
- Developers
- Marco Mascheroni
- Discussed w/ Factory operation prototype for configuration generation
- Mostly happy, some small changes requested
- Will start testing it in production for a small set of entries
- Lorena
- Mostly training and sick leave
- Provided feedback to some tickets
- Troubleshooting w/ Shreyas FIFE periodic script for black holes
- Working on black hole ticket w/ condor team
- Dennis
- Checking CI infrastructure
- Marco Mambelli
- Working on Factory monitoring
- Feedback to Brian Lin for a fix in the proxy renewal script
- Troubleshooting a couple of factory issues
- Meeting about containers in FIFE
- Next week there is the stakeholders meeting
- Marco gave feedback to Dennis's and Lorena's slides
- Marco Mambelli and Marco Mascheroni will provide the slides to Parag within the day
February 27, 2019¶
Marco Mascheroni, Marco Mambelli, Parag Mhashilkar, Maria Zvada
- Developers
- Marco Mambelli
- Made 3.4.4
- Working on problem where monitoring stats going to 0 but was not able to reproduce it
- Schedd downtime is reported incorrectly in xml as is updated only when there is work
- Working on troubleshooting factory with Krista
- Will adapt to singularity solution provided by HTCondor after 3.4.4
- Marco Mascheroni
- Manual glidein startup
- Setting attr to constant = False prevents publishing to factory
- Working on issues related to FIFE support and Shreyas about glidein shutdown
- CRIC site config generation
February 20, 2019¶
Dave Dykstra, Dennis Box, Marco Mascheroni, Marco Mambelli, Parag Mhashilkar, Lorena Lobato
- Singularity (Dave Dykstra)
- 3.0.3 is released in Fedora now. Waiting on another fix before it will be in osg production. There are some features that Atlas need but do not work
- Its currently in RC3. Only works for root users and planning for unprivileged users
- v3.5
- Developers
- Dennis
- Marco Mascheroni
- #19949 Got feedback from Lorena should be done quickly
- #21898 Lorena provided the fix
- Will feedback for #20861
- New project TODAS working with CMS to launch pilot. They start glidein_startup script and connect to another pool. There are validation scripts that need to be tackled.
- In 3.4.3 we could not change parameters that were const if the attr is const in global and not in entry
- Lorena
February 13, 2019¶
Marco Mascheroni, Dennis Box, Parag Mhashilkar, Lorena, Lobato
- Developers
- Dennis Box
- Marco Mascheroni
- Doing test of 3.4.3 and found couple of issues. Meta sites related issues.
- Started working with factory operator who will have cycles for development on work related to CRIC.
- Estimation of memory on sites with glidein cpus are auto
- Lorena Lobato
- Testing 3.4.3 with htcondor 8.8 and working with TJ for enabling statistics
- Handling classad from frontend to factory glidein job classad. Publish is not available in frontend side
February 06, 2019¶
Marco Mascheroni, Dennis Box, Parag Mhashilkar, Lorena, Lobato, Marco Mambelli, Dave Dykstra
- Dave Dykstra
- Singularity 3.03 is ready for osg upcoming
- Known issue with unprivileged node. When executing from docker requires privilege. Singularity dev team plan to fix it.
- WLCG working group meeting. More testing before rolling out unprivileged mode. On the order of 6 months. Takes long because of the Singularity audit going and scheduled to be done by mid June. Some members want to point to audit before making recommendation
- Atlas want to be able to read from docker on worker nodes. Download the docker containers on WN. Thats a lot of overhead and sounds crazy. They don't want to maintain image repo.
- Marco: CMS wants condor ssh to job to work but that required startd to be started as root which glideinwms cannot.
- Dave thinks he can provide some help in that direction
- Travel to WLCG workgroup. They are asking SI lab and they maybe able to pay for Dave's travel. CMS is already paying for CVMFS workshop.
- Dave and Marco to work together on providing solution for CMS
- Dennis Box
- Marco Mascheroni
- Couple of issues from CMS. Frontend crashing because there is one of the attribute in schedd that evaluated to error/undefined causing the exception. We need to add more protection. Leak in the fork.py. Changes may not be propagating to the frontend process.
- Factory operator added an entry. She couldn't get logs from pilots because the pilots were removed based on frontends request. Mambelli, added a disable to fix it. Getting log when you kill the job depends on batch system. if it is translated to kill -9 you don't get it back.
- CPU = auto and memory set to zero
- Operations team meeting
- Session on auto generation of config. Address problem at abstract level, trying to identify category of items required for config.
- Topics based on migration of services. Not focused on different factory/services etc
- Lorena
- Testing 3.4.3 glideinwms + htcondor 8.4.8 identify black hole
- Marco Mambelli
- Working mainly on troubleshooting issues about frontend crashing.
- HTCondor survives the glidein. Made changes on glidein and condor startup. There is a trap in place to forward the signal. Glideins were killed right after starting. Making the script more responsive. Working with Diego and a sysadmin at Purdue to troubleshoot. Their PBS is sending SIGTERM and SIGKILL one after the other, so we don't get time to react. Working with the OSG team since their wrapper script is not forwarding signals correctly.
- Release of 3.4.3 has been promoted to testing.
- Started working on the multi node glidein ticket. Added an option as multi glidein.
- glidein_off problem reported by Shreyas. Mascheroni to follow up.
- Project News
- There is a possibility of moving the project from Redmine to GitHub
- Marco submitted 4 student requests.
January 30, 2019¶
Marco Mambelli, Dennis Box, Parag Mhashilkar
- Marco Mambelli
- Move code review to Thursday and Friday during OSG All hands meeting.
- Talk to OSG. They released osg release. They will release glideinwms in the coming release in 2-3 weeks
- There is still issues about condor daemons surviving past glidein startup script
- Started working on Singularity to consider release distributed by OSG in CVMFS and consider it in the path.
- Wrote possible projects for summer interns and there was some communication with Sandra
- Dennis Box
January 23, 2019¶
Marco Mambelli, Marco Mascheroni, Dennis Box, Parag Mhashilkar, Dave Dykstra
- Singularity
- OSG releasing singularity 3.0.2 in upcoming (current release in EPEL)
- The problem seen at OSC w/ Singularity 3 (Too many symbolic links, was giving a permission error from the kernel to Singularity, was working w/ 2.6) seemed more a site problem: updating to RHEL 7.5 fixed the problem
- Singularity 3.0.3 released and will be soon in EPEL
- v3.4.3 Release Status
- Mambelli:
- RC2 out in osg-development, tests are OK so far
- Release expected for Thursday or Friday
- Still investigating some worker nodes where glidein is killed but condor keeps running and accepting jobs, moved the ticket to 3.5
- Developers
- Mascheroni
- Busy w/ operations this past week
- Will work more on interfacing with CRIC
- Will check w/ Frank about skipping Thursday at OSG all-hands to do GlideinWMS code review then
- Dennis
- kicked off automated tests, so far all OK
- Mambelli
- Completed 3.4.3 tickets
- Prepared RC and started tests
- Troubleshooting HTCondor surviving glidein. Possible race condition?
- Tentative code review dates: April 1, 2 or March 21, 22 (after OSG all-hands)
January 16, 2019¶
Marco Mambelli, Marco Mascheroni, Lorena Lobato, Dennis Box, Parag Mhashilkar
- v3.5 Release Status
- Mambelli:
- Waiting on feedback on couple of tickets. Cut RC but does not include those changes. It should be in the osg-development soon. It is in minefield
- Need to check with Steve ticket resolves what he needs.
- There might be some worker nodes where glidein is killed but condor keeps running and accepting jobs.
- Singularity support added process group and there is a condor warning that it may prevent you from condor to be killed.
- Developers
- Lorena
- Mainly working feedback of tickets and getting ready for release candidate
- Mambelli
- Monitoring tickets and working with Thomas. Last week for his last week. Frontend was reporting and Factory had some problems.
- Dennis interested in picking up the monitoring work from Thomas.
- Dennis
- One ticket for #21763. Parsing files into other config files. Not sure if it should go in this release? As per Marco some changes are necessary.
- Mascheroni
- Couple of fixes for the release
- Looking at the process group issue on worker node
- Working with the CRIC developers for interfacing with CRIC
- Tentative code review dates: April 1, 2 | https://cdcvs.fnal.gov/redmine/projects/glideinwms/wiki/Weekly_Meeting_Notes/161 | CC-MAIN-2020-29 | refinedweb | 3,170 | 57.06 |
$\newcommand{\bs}{\boldsymbol}\newcommand{\given}{\,|\,}$
The decision tree is one of the most powerful learning algorithms. Today we will be using Python to build a decision tree from scratch. First, let's load a flower dataset that we will be classifying.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
def load_data():
    np.random.seed(1)
    m = 200          # number of examples
    N = int(m/2)     # number of points per class
    X = np.zeros((m,2))              # data matrix where each row is a single example
    Y = np.zeros(m, dtype='uint8')   # labels vector (0 for red, 1 for blue)
    for j in range(2):
        ix = range(N*j, N*(j+1))
        t = np.linspace(0, 2*3.12, N)                                # theta
        r = 4*np.cos(3*(t + j*3.14/4)) + np.random.randn(N)*0.5      # radius
        X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
        Y[ix] = j
    return X, Y
# load and plot data
X, y = load_data()
plt.scatter(X[:,0], X[:,1], c=y, s=40, cmap=plt.cm.Spectral)
plt.show()
The data consists of two flowers. They are generated according to the polar equations$$ r_1 = 4\cos3\theta \,\, \text{ and } \,\, r_2 = 4\cos 3(\theta+\pi/4) $$
Clearly, a linear model will fail badly here. But classification trees have the flexibility to learn this data.
The idea behind tree-based methods is simple: We recursively split the feature space into two "rectangular" regions based on a cut-off point of one particular feature, and fit a constant model to each rectangle. First, let's consider a regression problem with continuous output variable $Y$, and $p$-dimensional input variable $\bs{X} = (X_1, ..., X_p)'$. Given a set of training examples $(\bs{x}^{(i)}, y^{(i)})$ for $i=1,2,..., N$, the algorithm needs to decide how to split the feature space into such rectangles. Suppose that this job is accomplished and we have $M$ regions $R_1, R_2,..., R_M$. Our regression model becomes$$ f(\bs{x}) = \sum_{m=1}^M c_m I(\bs{x} \in R_m), $$
where $I(\cdot)$ is the usual indicator function, and $c_m$ is a constant equal to the average of response variable $Y$ in the region $R_m$. If we can find the best binary partition that minimizes the RSS, $\sum (y^{(i)} - f(\bs{x}^{(i)}))^2$, then we will be done. However, such minimization is computationally infeasible since the number of possible partitions is unimaginably large. Instead, we use the following greedy scheme.
Choose some $j$th input variable $X_j$ ($0\leq j \leq p$) and a split point $s$. Define the pair of half-planes$$ R_1(j, s) = \{\bs{X} \given X_j\leq s\} \,\, \text{ and } \,\, R_2(j,s)=\{\bs{X}\given X_j >s\}. \tag{1} $$
Each binary split into two rectangles can be viewed as a splitting end of a tree branch. The splitting processes are also called the
nodes of the tree, and any terminal node where there is no more splitting is called a
leaf. The feature variable $X_j$ and the split point $s$ are chosen to minimize the total RSS:
$$ \min_{j,\,s}\left[\sum_{\bs{x}^{(i)} \in R_1(j,s)} \left(y^{(i)} - c_1\right)^2 + \sum_{\bs{x}^{(i)} \in R_2(j,s)} \left(y^{(i)} - c_2\right)^2\right], $$
where $c_1$ and $c_2$ are the average responses in $R_1(j,s)$ and $R_2(j,s)$, respectively.
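For intuition, a minimal sketch of this minimization on a single feature — enumerating every observed value as a candidate split point — might look as follows (an illustrative helper, not part of the classification code built later):

```python
import numpy as np

def best_split_1d(x, y):
    """Return the split point s (and its RSS) minimizing the total RSS
    of the two half-planes {x <= s} and {x > s} on one feature."""
    best_s, best_rss = None, np.inf
    for s in x:  # enumerate observed values as candidate split points
        left, right = y[x <= s], y[x > s]
        if len(left) == 0 or len(right) == 0:
            continue  # skip degenerate splits with an empty half-plane
        rss = np.sum((left - left.mean())**2) + np.sum((right - right.mean())**2)
        if rss < best_rss:
            best_s, best_rss = s, rss
    return best_s, best_rss
```

With two well-separated clusters, the split lands exactly between them.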
This problem is quick to solve computationally via enumeration. After we find the optimal split, we repeat the process to each of the two resulting regions. The next question we should ask is:
When should we stop the splitting process?
The best procedure is to first grow the tree as large as possible (stop only when some minimum node size, say 5, is reached). Then, we prune the tree using the following procedure.
Let $T_0$ be the original tree and let $T \subset T_0$ denote the pruned subtree. The pruning is done by reversing the splitting processes in (1). This is exactly analogous to pruning a real tree without breaking its main branch. Let $|T|$ denote the number of leaves in $T$. Let $N_m$ denote the number of training examples in the $m$th leaf $R_m$, where $1\leq m \leq |T|$. We define the
cost complexity of the subtree $T$ as
$$ C_{\alpha}(T) = \sum_{m=1}^{|T|} \sum_{\bs{x}^{(i)} \in R_m} \left(y^{(i)} - c_m\right)^2 + \alpha |T|, \tag{2} $$
where again $c_m$ is the average of the response variable $Y$ for all training examples in $R_m$, and can be written formally as $c_m = \text{ave}(y^{(i)} \given \bs{x}^{(i)} \in R_m)$. The pruning parameter $\alpha \geq 0$ imposes a penalty for large tree sizes, and the degree of pruning depends on the magnitude of $\alpha$. Clearly, when $\alpha =0$, we get the original tree $T_0$ without pruning. So the remaining task is to choose the parameter $\alpha$.
For each value of $\alpha$, there is a unique subtree $T_{\alpha}$ that minimizes the cost complexity $C_{\alpha}(T)$. To find $T_{\alpha}$ we use a method called
cost complexity pruning. We successively collapse the interval node of the tree that produces the smallest increase in the cost complexity defined in (2). The process ends until the tree becomes a single stump. Among this finite sequence of subtrees, we can find $T_{\alpha}$. Finally, using a whole range of values of $\alpha$, we can use the K-fold cross-validation technique to choose the best $\alpha$.
A classification tree is very similar to a regression tree, except that we can no longer use the RSS term in (2). Suppose the response variable $Y$ takes values $1,2,..., K$. In the terminal rectangle $R_m$, we define the proportion of class $k$ to be$$ \widehat{p}_{mk} = \text{ave}\left(I(y^{(i)} = k) \given \bs{x}^{(i)} \in R_m \right). $$
Unlike in regression trees where we classify the observations by taking the average value of responses in a rectangle, the classification tree assigns an observation in $R_m$ to the majority class $k$ in $R_m$, which is the class $k$ that maximizes $\widehat{p}_{mk}$. Despite the fixed classification of each terminal rectangle, we could
interpret the process as classifying to class $k$ with probability $\widehat{p}_{mk}$. Additionally, we could use one-vs-all encoding by letting 1 represent class $k$ and 0 for all other classes. Then the variance over the training examples in $R_m$ is simply the binomial variance $\widehat{p}_{mk} (1-\widehat{p}_{mk})$. Summing these variances over the $K$ classes, we obtain an impurity measure called the
Gini index, defined as
$$ G_m = \sum_{k=1}^{K} \widehat{p}_{mk}\left(1 - \widehat{p}_{mk}\right). \tag{3} $$
Using the same split scheme as in (1), at each iteration, we choose a feature variable $X_j$ and a split point $s$ that minimizes the weighted impurity measure:$$ w_1 G_1 + w_2 G_2. \tag{4} $$
where $w_i$ is the
weight associated to $R_i(j,s)$, obtained by dividing the number of training examples by the total number of training examples before the split, and $G_i$ is the Gini index for $R_i(j,s)$, for $i=1,2$. (Note that we could also use $D_i$.)
Using the Gini index, the
cost complexity in (2) is modified to be
$$ C_{\alpha}(T) = \sum_{m=1}^{|T|} w_m G_m + \alpha |T|, \tag{5} $$
where $w_m$ is the weight assigned to the $m$th leaf $R_m$, which is proportional to the number of training examples in $R_m$.
We will build a classification tree from scratch. First let's define the Gini index using vectorized implementation in numpy.
def Gini(y):
    """y is a vector of class labels"""
    k = len(np.unique(y))   # number of unique classes
    p = np.zeros(k)         # class proportions [p1,...,pk]
    for i in range(k):
        p[i] = np.mean(i == y)
    return np.dot(p, (1-p))
# test 1
y1 = np.array([1,1,1,2,2,2,3])
print(Gini(y1))
0.48979591836734687
# test 2
y2 = np.array([1,1,1,1,1,1,1])
print(Gini(y2))
0.0
Next we will define each node in the decision tree as a Python class. The class serves as a dictionary that holds the data and label values of each node in the tree. The reason I use a class instead of a dictionary is that I can add different methods to it. For example, within each node, I can make predictions based on the majority rule.
class node():
    """node containing data and label"""
    def __init__(self, X, y):
        self.data = X
        self.label = y

    def prediction(self):
        # make prediction
        return stats.mode(self.label)[0][0]

    def score(self):
        # return probability for prediction
        return round(stats.mode(self.label)[1][0]/len(self.label), 2)
# Let's build a mini data set
np.random.seed(1)
data = np.random.randn(20, 2)
label = np.random.randint(0, 3, 20)

# print first 5 rows of dataset
import pandas as pd
df = pd.DataFrame(data, columns=['X1', 'X2'])
df['Y'] = label
print(df.head())
         X1        X2  Y
0  1.624345 -0.611756  0
1 -0.528172 -1.072969  1
2  0.865408 -2.301539  1
3  1.744812 -0.761207  1
4  0.319039 -0.249370  0
testnode = node(data, label)
print('Prediction:', testnode.prediction())
print('Probability:', testnode.score())
Prediction: 0
Probability: 0.4
Next we will define a split function that splits the feature space by minimizing the weighted impurity measure defined in (4).
def split(X, y, j, s):
    '''
    A split on variable j and point s
    **Parameters**
    X = data matrix with dimension (n x p)
    y = class labels of length n
    j = feature number
    s = split point on feature j
    '''
    Xj = X[:,j]
    return X[Xj > s], X[Xj <= s], y[Xj > s], y[Xj <= s]
def best_split(R, min_node_size=1):
    '''
    Find the best split
    **Parameters**
    R = node class containing data and label
    min_node_size = minimum size of the node for split
    '''
    X, y = R.data, R.label                    # reading data and label from node
    if len(y) <= min_node_size:
        return node(X, y), None, None, None   # no change
    else:
        impurity, impurity_index = [], []
        for j in range(X.shape[1]):           # choose feature j
            for s in X[:,j]:                  # choose split point s
                X1, X2, y1, y2 = split(X, y, j, s)
                w = X1.shape[0]/(X1.shape[0] + X2.shape[0])       # calculate weights
                impurity.append(w*Gini(y1) + (1-w)*Gini(y2))      # saving impurity
                impurity_index.append((j, s))                     # saving feature and split point
        impurity_min = min(impurity)          # minimum impurity value
        if impurity_min >= Gini(y):
            return node(X, y), None, None, None   # no change
        else:
            best_index = impurity.index(impurity_min)   # find min index
            (j, s) = impurity_index[best_index]         # obtain best (j,s)
            X1, X2, y1, y2 = split(X, y, j, s)          # find the best split
            return node(X1, y1), node(X2, y2), j, s
# test
print(best_split(testnode)[0])
<__main__.node object at 0x7f57a92c5cf8>
Next, we create a class that asks questions that divide the feature space.
class question:
    """Used for printing tree and making predictions"""
    def __init__(self, j, s):
        self.feature = j   # feature number
        self.cut_off = s   # split point

    # determine whether example belongs to the right split
    def match(self, example):
        val = example[self.feature]
        return val >= self.cut_off

    # Print the question
    def __repr__(self):
        return "Is %s %s %s?" % (
            "Var " + str(self.feature+1), ">=", str(round(self.cut_off, 2)))
For example, we can ask:
# test
myquestion = question(0, 1)
print(myquestion)
Is Var 1 >= 1?
# test
print(myquestion.match([0, 1, 2]), ';', myquestion.match([2, 1, 2]))
False ; True
Now we are ready to build the tree using recursive programming.
def build_tree(R, min_node_size=1, verbose=True):
    """R is the initial node in the tree"""
    R1, R2, j, s = best_split(R, min_node_size)   # do a split
    # When there is no more split, a leaf is formed.
    if (R2 == None):
        if verbose: print("A leaf has formed.")
        return R1
    # next we recursively split the tree
    if verbose: print("Split occured on Var {} at {}".format(j, s))
    R11 = build_tree(R1, min_node_size, verbose)
    R22 = build_tree(R2, min_node_size, verbose)
    return {"left_split": R11, "right_split": R22, "question": question(j,s)}
# test
mytesttree = build_tree(testnode, min_node_size=5)
Split occured on Var 0 at -0.2678880796260159
Split occured on Var 1 at -0.7612069008951028
Split occured on Var 0 at 0.31903909605709857
A leaf has formed.
A leaf has formed.
Split occured on Var 0 at -0.12289022551864817
A leaf has formed.
A leaf has formed.
Split occured on Var 0 at -0.671246130836819
A leaf has formed.
A leaf has formed.
But this is not the best representation of a tree. It helps to print out the structure of a tree.
def print_tree(mytree, spacing=""):
    """mytree is the output of build_tree function"""
    # final step of recursion; we have reached a leaf
    if isinstance(mytree, node):
        print(spacing + "---------------------------")
        print(spacing + "Predict:", mytree.prediction(), "- ( Prob:", mytree.score(), ")")
        print(spacing + "---------------------------")
        return
    # Print the question at this node
    print(spacing + str(mytree["question"]))
    # recursively call the function on the first split
    print(spacing + '--> True:')
    print_tree(mytree["left_split"], spacing + "  ")
    # recursively call the function on the second split
    print(spacing + '--> False:')
    print_tree(mytree["right_split"], spacing + "  ")
Let's print our tree!
print_tree(mytesttree)
Is Var 1 >= -0.27?
--> True:
  Is Var 2 >= -0.76?
  --> True:
    Is Var 1 >= 0.32?
    --> True:
      ---------------------------
      Predict: 2 - ( Prob: 0.75 )
      ---------------------------
    --> False:
      ---------------------------
      Predict: 0 - ( Prob: 1.0 )
      ---------------------------
  --> False:
    Is Var 1 >= -0.12?
    --> True:
      ---------------------------
      Predict: 1 - ( Prob: 0.75 )
      ---------------------------
    --> False:
      ---------------------------
      Predict: 0 - ( Prob: 0.5 )
      ---------------------------
--> False:
  Is Var 1 >= -0.67?
  --> True:
    ---------------------------
    Predict: 1 - ( Prob: 1.0 )
    ---------------------------
  --> False:
    ---------------------------
    Predict: 0 - ( Prob: 0.6 )
    ---------------------------
This is precisely why trees are among the best classification algorithms in machine learning. The interpretability cannot be beat! All one needs to do is follow a series of written questions to obtain the final result. Next we define a function that makes a classification for a single example.
# define a function that can classify a single training example
def classify(mydata, mytree):
    """mydata is a p-dimensional vector (x1,...,xp)"""
    # reaching a leaf, make prediction!
    if isinstance(mytree, node):
        return mytree.prediction()
    # recursively classify the two split data sets
    if mytree["question"].match(mydata):
        return classify(mydata, mytree["left_split"])
    else:
        return classify(mydata, mytree["right_split"])
# test
print(classify([1, 2], mytesttree), ';', classify([0, 1], mytesttree))
2 ; 0
# extend to a set of training examples
def classify_batch(mydata, mytree):
    """mydata is a matrix of dimension (m, p)"""
    predictions = []
    # make a prediction for each row of the matrix
    for i in range(mydata.shape[0]):
        predictions.append(classify(mydata[i], mytree))
    return np.array(predictions)
# test
print(classify_batch(np.array([[1,2],[0,1]]), mytesttree))
[2 0]
Now we are ready to classify the original flower dataset.
mynode = node(X,y)   # initialize the node
mytree = build_tree(mynode, min_node_size=1, verbose=False)   # build a tree
Let's visualize the decision boundary of the simple (unpruned) classification that we built.
# Define a function that plots decision boundary of 2D classification problem
# (body below is a standard meshgrid reconstruction; the original was truncated,
#  and the grid step h is an assumed value)
def plot_decision_boundary_tree(mytree, data, label):
    x_min, x_max = data[:,0].min() - 1, data[:,0].max() + 1
    y_min, y_max = data[:,1].min() - 1, data[:,1].max() + 1
    h = 0.05   # step size of the mesh grid
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # classify every grid point with the fitted tree
    Z = classify_batch(np.c_[xx.ravel(), yy.ravel()], mytree)
    Z = Z.reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    plt.scatter(data[:,0], data[:,1], c=label, cmap=plt.cm.Spectral, edgecolors='k')
    plt.show()
plot_decision_boundary_tree(mytree, X, y)
Well, the decision boundary looks "boxy", and certainly needs some improvement. But overall, for such a simple model, this is doing great! We can increase the minimum node size to 50.
# build a tree
mytree2 = build_tree(mynode, min_node_size=50, verbose=False)
# print tree
print_tree(mytree2)
Is Var 2 >= -2.68?
--> True:
    Is Var 2 >= -0.4?
    --> True:
        Is Var 1 >= 1.22?
        --> True:
            Predict: 1 ( Prob: 1.0 )
        --> False:
            Is Var 1 >= -0.98?
            --> True:
                Is Var 2 >= 1.43?
                --> True:
                    Predict: 0 ( Prob: 1.0 )
                --> False:
                    Predict: 0 ( Prob: 0.5 )
            --> False:
                Predict: 1 ( Prob: 0.86 )
    --> False:
        Is Var 1 >= 0.23?
        --> True:
            Predict: 0 ( Prob: 1.0 )
        --> False:
            Predict: 0 ( Prob: 0.68 )
--> False:
    Predict: 1 ( Prob: 1.0 )
plot_decision_boundary_tree(mytree2, X, y)
So the leaf size controls the degree of over-fitting. We will plot the decision boundary for a range of minimum leaf sizes.
# Define a function that returns the decision boundary of 2D classification problem
def plot_decision_boundary_object(mytree, data, label):
    x_min, x_max = data[:, 0].min() - 0.5, data[:, 0].max() + 0.5
    y_min, y_max = data[:, 1].min() - 0.5, data[:, 1].max() + 0.5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01),
                         np.arange(y_min, y_max, 0.01))
    Z = classify_batch(np.c_[xx.ravel(), yy.ravel()], mytree).reshape(xx.shape)
    return xx, yy, Z

f, ax = plt.subplots(3, 4, figsize=(30, 20))
for i in range(12):
    mytree = build_tree(mynode, min_node_size=1+4*i, verbose=False)
    xx, yy, Z = plot_decision_boundary_object(mytree, X, y)
    ax[i//4, i%4].contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    ax[i//4, i%4].scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.Spectral, edgecolors='k')
    ax[i//4, i%4].set_title("Node size = " + str(4*i+1))
plt.show()
Prerequisite - Binary Tree
A heap is a data structure which uses a binary tree for its implementation. It is the basis of the heapsort algorithm and is also used to implement a priority queue. It is basically a complete binary tree and generally implemented using an array. The root of the tree is the first element of the array.
Since a heap is a binary tree, we can also use the properties of a binary tree for a heap i.e.,
$Parent(i) = \lfloor \frac{i}{2} \rfloor$
$Left(i) = 2*i$
$Right(i) = 2*i + 1$
We declare the size of the heap explicitly, and it may differ from the size of the array: for an array of size Array.length, only the elements that fall within the declared heap size are part of the heap.
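The index arithmetic above can be written as a small Python sketch (1-indexed; slot 0 of the array is left unused so that the root sits at index 1):

```python
def parent(i):
    return i // 2

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

# a heap of size 5 stored in an array of length 6 (A[0] is a placeholder)
A = [None, 16, 9, 10, 7, 8]
heap_size = 5

# node 2 (value 9) has its children at indices 4 and 5
print(parent(5), left(2), right(2))  # -> 2 4 5
```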
Properties of a heap
A heap is implemented using a binary tree and thus follows its properties, but it has some additional properties which differentiate it from a normal binary tree. Basically, we implement two kinds of heaps:
Max Heap → In a max-heap, the value of a node is either greater than or equal to the value of its children.
A[Parent[i]] >= A[i] for all nodes i > 1
Min Heap → In a min-heap, the value of a node is either smaller than or equal to the value of its children.
A[Parent[i]] <= A[i] for all nodes i > 1
Thus in max-heap, the largest element is at the root and in a min-heap, the smallest element is at the root.
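Both properties are easy to check mechanically; here is a Python sketch using the same 1-indexed layout:

```python
def is_max_heap(A, heap_size):
    # A[parent(i)] >= A[i] must hold for every node i > 1
    return all(A[i // 2] >= A[i] for i in range(2, heap_size + 1))

def is_min_heap(A, heap_size):
    # A[parent(i)] <= A[i] must hold for every node i > 1
    return all(A[i // 2] <= A[i] for i in range(2, heap_size + 1))

A = [None, 16, 9, 10, 7, 8]  # A[0] unused, root at A[1]
print(is_max_heap(A, 5))  # -> True
print(is_min_heap(A, 5))  # -> False, the smallest element is not at the root
```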
Now that we know what a heap is, let's focus on making a heap from an array and on some basic operations performed on a heap.
Heapify
Heapify is an operation applied to a node of a heap to maintain the heap property. It is applied to a node whose children (left and right) are heaps (follow the heap property) but which may itself be violating the property.
We simply make the node travel down the tree until the property of the heap is satisfied. It is illustrated on a max-heap in the picture given below.
We are basically swapping the node with the child having the larger value. By doing this, the node is now larger than its two children. You can see that the node 2 (value of 10) is now larger than its children 4 (value of 4) and 5 (value of 5).
But the child whose value was swapped might be violating the heap property. In the above picture, node 4 is smaller than the node 9 and thus, it is violating the max-heap property.
So, we are again implementing the Heapify operation on the child. This will be repeated until the property of max-heap is satisfied.
You can see that after the completion of the Heapify operation, the tree is now a heap. So, let's look at the code to Heapify a max-heap
Code for Max-Heapify
MAX-HEAPIFY(A, i)
  left = 2i
  right = 2i + 1

  // checking for largest among left, right and node i
  largest = i
  if left <= heap_size
    if (A[left] > A[largest])
      largest = left
  if right <= heap_size
    if (A[right] > A[largest])
      largest = right

  if largest != i  // node is not the largest, we need to swap
    swap(A[i], A[largest])
    MAX-HEAPIFY(A, largest)  // child after swapping might be violating max-heap property
In MAX-HEAPIFY(A, i), A is the array used for the implementation of the heap and i is the node on which we are calling the function. We first calculate the largest among the node itself and its children. Then, we check whether the largest element is among its children - if largest != i. If the node itself is the largest, then the heap property is already satisfied; if it is not, we swap the largest element with the node - swap(A[i], A[largest]). As discussed earlier, the child whose value was swapped might not follow the heap property after the swapping, so we call the function on it again - MAX-HEAPIFY(A, largest).
The node on which we apply Heapify travels down the tree and, in the worst case, becomes a leaf. So, the worst-case running time will be of the order of the height of the tree, i.e., $O(\lg{n})$.
Analysis of Heapify
Although we have predicted the running time to be $O(\lg{n})$, let's see it mathematically.
The calculations of left, right and the maximum element take $\Theta(1)$ time.
Now, we are left with calculating the time taken by MAX-HEAPIFY(A, largest), and that depends on the size of the input.
The tree is divided into two subtrees. Since MAX-HEAPIFY depends on the size of the tree (or of the subtree, in recursive calls), the worst case occurs when this size is maximum. This happens when the last level of the tree is half full.
In this case, one of the subtrees will have one level more than the other one. This will maximize the number of nodes in the subtree for a fixed number of nodes n in the complete binary tree.
We know that a tree with $i$ levels has a total of $2^{i+1} - 1$ nodes. Thus, if the right subtree has $i$ levels, it will have $2^{i+1} - 1$ nodes, and the left subtree will have $i+1$ levels and thus a total of $2^{i+2} - 1$ nodes.
The total number of nodes in the tree $= 2^{i+1} - 1 + 2^{i+2} - 1 + 1(root) = n$
$2^{i+1} - 1 + 2^{i+2} = n$
$2*2^i + 4*2^i = n+1$
$6*2^i = n+1$
$i =\lg{\frac{n+1}{6}}$
Now, the total number of nodes in the left subtree = $2^{i+2} - 1 = 4*2^i - 1 = \frac{4(n+1)}{6} - 1 = \frac{2(n+1)}{3} -1 = \frac{2n}{3} - \frac{1}{3}$
$\frac{2n}{3} - \frac{1}{3} \le \frac{2n}{3}$
So, we can use $\frac{2n}{3}$ as its upper bound and write the recurrence equation as $T(n) \leq T(\frac{2n}{3}) + \Theta(1)$
By using Master's theorem, we can easily find out the running time of the algorithm to be $O(\lg{n})$.
We are left with one final task: to make a heap from the array provided to us. We know that Heapify, when applied to a node whose children are heaps, makes the subtree rooted at that node a heap as well. The leaves of a tree don't have any children, so they follow the property of a heap and are already heaps.
We can implement the Heapify operation on the parent of these leaves to make them heaps.
We can simply iterate up to root and use the Heapify operation to make the entire tree a heap.
Code for Build-Heap
We simply have to iterate from the parent of the leaves to the root of the tree to call Heapify on each node. For this, we need to find the leaves of the tree. The nodes from $\lceil \frac{n}{2}\rceil + 1$ to $n$ are leaves. We can easily check this because $2*i = 2 * ( \lceil \frac{n}{2}\rceil + 1 ) \ge n+2$, which is outside the heap; thus, such a node doesn't have any children, so it is a leaf. Thus, we can make our iteration from $\lceil\frac{n}{2}\rceil$ down to the root and call the Heapify operation.
BUILD-HEAP(A)
  for i in floor(A.length/2) downto 1
    MAX-HEAPIFY(A, i)
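The same pair of procedures can be written as a runnable Python sketch (1-indexed, with A[0] unused):

```python
def max_heapify(A, heap_size, i):
    # sift node i down until the max-heap property holds below it
    left, right = 2 * i, 2 * i + 1
    largest = i
    if left <= heap_size and A[left] > A[largest]:
        largest = left
    if right <= heap_size and A[right] > A[largest]:
        largest = right
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, heap_size, largest)

def build_max_heap(A, heap_size):
    # leaves are already heaps, so start from the last internal node
    for i in range(heap_size // 2, 0, -1):
        max_heapify(A, heap_size, i)

A = [None, 4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
build_max_heap(A, 10)
print(A[1])  # -> 16, the largest element is now at the root
```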
Analysis of Build-Heap
We know that Heapify takes $O(\lg{n})$ time and there are $O(n)$ such calls - thus a total of $O(n\lg{n})$ time.
This gives us an upper bound for our operation but we can reduce this upper bound and get a more precise running time of $O(n)$.
A More Precise Analysis
We know that the Heapify makes a node travel down the tree, so it will take $O(h)$ time, where h is the height of the node.
We also know that the height of a node is $O(\lg{n})$, where n is the number of nodes in the subtree.
Also, the maximum number of nodes with a height h is $\lceil \frac{n}{2^{h+1}}\rceil$ (You can prove it by induction).
So, the total time taken by the Heapify function for all nodes at a height h = $O(h)*\lceil \frac{n}{2^{h+1}}\rceil$ (height of the nodes*number of nodes).
Now, this height will change from $0$ to $\lfloor \lg{n}\rfloor$.
Thus, the total time taken for all nodes = $$\sum_{h=0}^{\lfloor \lg{n}\rfloor} {\left( \Bigl\lceil \frac{n}{2^{h+1}}\Bigr\rceil *O(h) \right)}$$
$$ = O \left( n * \sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2*2^{h}}} \right) $$
$$ = O \left( n * \sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}} \right) $$
Taking the term $\sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}}$:
$$ \sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}} \lt \sum_{h=0}^{\infty} {\frac{h}{2^{h}}} $$
$$ \text{Let }S = \sum_{h=0}^{\infty} {\frac{h}{2^{h}}} $$
$$ \text{or, }S = \frac{1}{2} + \frac{2}{2^2} + \frac{3}{2^3} + .... $$
Multiplying both sides by 2,
$$ 2S = 1 + \frac{2}{2} + \frac{3}{2^2} + \frac{4}{2^3} + .... $$
Subtracting $S$ from $2S$ term by term,
$$ 2S - S = S = 1 + \frac{1}{2} + \frac{1}{2^2} + \frac{1}{2^3} + ... $$
The right-hand side is an infinite G.P. with $1$ as the first term and $\frac{1}{2}$ as the common ratio.
$$ S = \frac{1}{1 - \frac{1}{2}} = 2 $$
So, $\sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}} \lt S = 2$.
Putting this value in $O \left( n * \sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}} \right)$:
Running Time = $O\left(n * 2\right) = O(n)$
So, we can make a heap from an array in a linear time.
Code for Heap - C, Java and Python
#include <stdio.h>

int tree_array_size = 11;
int heap_size = 10;

void swap(int *a, int *b)
{
    int t;
    t = *a;
    *a = *b;
    *b = t;
}

//function to get right child of a node of a tree
int get_right_child(int A[], int index)
{
    if ((((2 * index) + 1) < tree_array_size) && (index >= 1))
        return (2 * index) + 1;
    return -1;
}

//function to get left child of a node of a tree
int get_left_child(int A[], int index)
{
    if (((2 * index) < tree_array_size) && (index >= 1))
        return 2 * index;
    return -1;
}

//function to get the parent of a node of a tree
int get_parent(int A[], int index)
{
    if ((index > 1) && (index < tree_array_size)) {
        return index / 2;
    }
    return -1;
}

void max_heapify(int A[], int index)
{
    int left_child_index = get_left_child(A, index);
    int right_child_index = get_right_child(A, index);

    // finding largest among index, left child and right child
    int largest = index;

    if ((left_child_index <= heap_size) && (left_child_index > 0)) {
        if (A[left_child_index] > A[largest]) {
            largest = left_child_index;
        }
    }

    if ((right_child_index <= heap_size && (right_child_index > 0))) {
        if (A[right_child_index] > A[largest]) {
            largest = right_child_index;
        }
    }

    // largest is not the node, node is not a heap
    if (largest != index) {
        swap(&A[index], &A[largest]);
        max_heapify(A, largest);
    }
}

void build_max_heap(int A[])
{
    int i;
    for (i = heap_size / 2; i >= 1; i--) {
        max_heapify(A, i);
    }
}

int main()
{
    //tree is starting from index 1 and not 0
    int A[] = {0, 15, 20, 7, 9, 5, 8, 6, 10, 2, 1};
    build_max_heap(A);

    int i;
    for (i = 1; i <= heap_size; i++) {
        printf("%d\n", A[i]);
    }
    return 0;
}
The game is pretty simple, random leds are lit up on the xmas tree, the player has to press the button when the green led on the top of the tree is lit up. The quicker you are, the higher you score.
A random animation is played when the tree is waiting for the next player, you press the button to start, all the leds are lit up and then its time to play. At the end the players position is displayed on the tree, with 1st place lighting up the top of the tree.
Have a go
You will need a Raspberry Pi, a GPIO Xmas Tree and a button:
- Plug the GPIO Xmas tree into the far left set of pins on the GPIO header
- Connect the button up between 3.3V and GPIO 4.
Update - 23/12/2014 - 2 player:
- Connect the button up between 3.3V and GPIO22
- Press the 2nd button to start the 2 player game, pressing the 1st button, still starts the 1 player game
Get the code

Download the code from github.com/martinohanlon/GPIOXmasTreeGame and run the game - open a terminal and run:
git clone https://github.com/martinohanlon/GPIOXmasTreeGame.git
cd GPIOXmasTreeGame/xmastreegame
python xmastreegame.py
How does it work
There was only 1 particular challenge to creating the game (other than my dodgy soldering which meant I ruined the first tree I brought) - doing more than one thing at a time!
The libraries supplied by pocketmoneytronics are really good and there are some great examples. The problem I had is that when you tell the tree to "light up leds 1 & 4", that is all your program can do; the call blocks, because the tree uses Charlieplexing and the libraries don't support threading.
Charliewhat? In summary, each gpio pin is actually controlling 2 leds, and when you light up 2 leds on the tree the program is actually turning the leds on and off independently really quickly, so quickly that it's tricking your eyes into thinking that both leds are actually turned on.

This is why you can't just "turn on led 1 and led 4"; the tree doesn't work that way.
To get around this, I made a threaded version of the pocketmoneytronics tree.py module.
Using the original libraries, you would have used the following code to light up all the leds for 1 second:
tree.setup()
#turn all the leds on for 1 second
#the program stops here and nothing can happen until the leds turn off
tree.leds_on_and_wait(ALL, 1)
tree.cleanup()

Using my threaded class you would use:
#create the XmasTree object
tree = XmasTree()
#start the tree object
tree.start()
#turn all leds on
tree.leds_on(ALL)
#the program can now do what it wants and the leds will stay on
sleep(1)
tree.stop()

The rest of the program was pretty easy to create: wait for a button to be pressed, light up leds randomly with a random delay in between, get the difference in time between turning on the green led and the button being pressed and hey presto, a Christmas themed game.
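Internally, a threaded wrapper can be structured roughly like this. This is a simplified, hypothetical sketch, not the real XmasTree class from ThreadedTree.py: the actual GPIO work is stubbed out here as _drive.

```python
import threading
import time

def pattern_to_leds(pattern):
    # decode a bitmask into led numbers, e.g. 1 + 4 selects leds 0 and 2
    return [i for i in range(7) if pattern & (1 << i)]

class XmasTree(threading.Thread):
    """Multiplexes the leds in a background thread so leds_on() never blocks."""

    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True
        self.pattern = 0
        self.stopped = False

    def leds_on(self, pattern):
        # just record the requested pattern and return immediately
        self.pattern = pattern

    def run(self):
        while not self.stopped:
            # light each requested led in turn, fast enough that
            # persistence of vision makes them all appear lit
            for led in pattern_to_leds(self.pattern):
                self._drive(led)
                time.sleep(0.002)
            time.sleep(0.002)

    def _drive(self, led):
        pass  # the real class would set the GPIO pins for this led here

    def stop(self):
        self.stopped = True

print(pattern_to_leds(1 + 4))  # -> [0, 2]
```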
The code
All the code is here github.com/martinohanlon/GPIOXmasTreeGame.
import threading
import RPi.GPIO as GPIO
from time import sleep, time
from ThreadedTree import XmasTree
from random import getrandbits, randint
from os.path import isfile

#CONSTANTS
#leds
L0 = 1
L1 = 2
L2 = 4
L3 = 8
L4 = 16
L5 = 32
L6 = 64
ALL = 1+2+4+8+16+32+64

#leds as a list
LEDS = [L0,L1,L2,L3,L4,L5,L6]

#leds as a list descending down the tree
LEDSDESC = [L0,L6,L5,L4,L2,L1,L3]

#gpio pin the game button is connected to
NEWGAMEBUTTONPIN = 4

#gpio pin which will cause the game to stop if triggered
STOPGAMEBUTTONPIN = 17

class TreeRandom(threading.Thread):
    def __init__(self, xmasTree):
        #setup threading
        threading.Thread.__init__(self)
        #setup properties
        self.stopped = False
        self.running = False
        self.xmasTree = xmasTree

    def run(self):
        self.running = True
        while not self.stopped:
            ledsToLight = 0
            #loop through all the lights, randomly pick which ones to light
            for led in LEDS:
                if getrandbits(1) == 1:
                    ledsToLight = ledsToLight + led
            #turn the leds on
            self.xmasTree.leds_on(ledsToLight)
            #delay
            sleep(1)
        #when its stopped turn the leds off
        self.xmasTree.leds_on(0)
        self.running = False

    def stop(self):
        #stop the animation
        self.stopped = True
        #wait for it to stop running
        while self.running:
            sleep(0.01)

class TreeGame():
    def __init__(self, xmasTree, scoresFile):
        self.scoresFile = scoresFile
        self.scores = self._loadScores()
        #print self.scores

    def play(self):
        #turn on all leds
        xmasTree.leds_on(ALL)
        #wait a bit
        sleep(2)
        #get a random number, which will be how many leds will be lit before the green one
        steps = randint(7,14)
        for step in range(0,steps):
            #light a random red led
            ledToLight = LEDS[randint(1,6)]
            xmasTree.leds_on(ledToLight)
            #wait for a random time between 0.5 and 1 second
            timeToSleep = randint(5,10) / 10.0
            sleep(timeToSleep)
        #light the green led
        xmasTree.leds_on(L0)
        #get the time
        startTime = time()
        #wait for button to be released (if it's pressed)
        while(GPIO.input(NEWGAMEBUTTONPIN) == 1):
            sleep(0.001)
        #wait for the button to be pressed
        while(GPIO.input(NEWGAMEBUTTONPIN) == 0):
            sleep(0.001)
        #get the time
        endTime = time()
        timeDiff = endTime - startTime
        #put the score in the score list and find the position
        # loop through all the scores
        for score in range(0,len(self.scores)):
            # is this time less than the current score?
            if timeDiff < self.scores[score]:
                #record the players position
                position = score
                self.scores.insert(score,timeDiff)
                break
        #save to the score file
        self._saveScores()
        #flash the position
        self._displayPosition(position)

    def _displayPosition(self,position):
        #if the position was less than 6, flash it on the tree
        # else flash all the lights
        if position <= 6:
            ledToLight = LEDSDESC[position]
        else:
            ledToLight = ALL
        #flash the position
        for count in range(15):
            xmasTree.leds_on(ledToLight)
            sleep(0.2)
            xmasTree.leds_on(0)
            sleep(0.2)

    # load the scores file
    def _loadScores(self):
        scores = []
        #does the file exist? If so open it
        if isfile(self.scoresFile):
            with open(self.scoresFile, "r") as file:
                for score in file:
                    scores.append(float(score))
        else:
            #no file so put an initial score which is massive
            scores.append(999)
        return scores

    # save the scores file
    def _saveScores(self):
        with open(self.scoresFile, "w") as file:
            for score in self.scores:
                file.write(str(score)+"\n")

#main program
if __name__ == "__main__":
    #setup GPIO
    GPIO.setmode(GPIO.BCM)
    #setup the new game button
    GPIO.setup(NEWGAMEBUTTONPIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    #setup the stop game button
    GPIO.setup(STOPGAMEBUTTONPIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    #create threaded tree object
    xmasTree = XmasTree()
    #start the xmas tree
    xmasTree.start()
    #create tree game object
    treeGame = TreeGame(xmasTree, "scores.txt")
    try:
        stopGame = False
        #loop until the stop game pin is set
        while(not stopGame):
            #run the xmas tree random animation
            treeRandom = TreeRandom(xmasTree)
            treeRandom.start()
            #wait until a button is pressed to either start a new game or stop the game
            while(GPIO.input(NEWGAMEBUTTONPIN) == 0 and GPIO.input(STOPGAMEBUTTONPIN) == 0):
                sleep(0.01)
            #new game
            if GPIO.input(NEWGAMEBUTTONPIN) == 1:
                #stop the animation
                treeRandom.stop()
                #run the game
                treeGame.play()
                #game over, start the animation again
                sleep(1)
            #stop game
            elif GPIO.input(STOPGAMEBUTTONPIN) == 1:
                stopGame = True
    finally:
        #stop tree random animation
        treeRandom.stop()
        #stop xmas tree
        xmasTree.stop()
        #cleanup gpio
        GPIO.cleanup()
IronPython and C#
So the other day I wrote about dynamic types in C#. I covered a few use cases from COM interaction to working with other languages. Well, today I have put together an example for you that will load a Python file into C#, through IronPython.
Before we can work with IronPython in C#, we need to setup our environment. Here is a quick overview of the steps we will take before we work with IronPython:
- Install the latest stable release of IronPython
- Create a new C# Console Application in Visual Studio 2010
- Add required references for IronPython
- …then write the code!
The first step in working with IronPython in Visual Studio 2010 is to actually install IronPython. You’ll need to visit IronPython.net to grab the latest version of IronPython. Just install the latest stable release. For reference, I installed IronPython version 2.6.1 when I wrote this article. Just install all the recommended components. After that is done you can go ahead and startup Visual Studio.
After Visual Studio has started up, you’ll need to start a new C# Console Application project. After you have created that we are going to need to add references (Right-Click on References in the Solution Explorer > “Add Reference…”) to this project. Assuming you installed IronPython in the default directory you will find all the needed references in “C:\Program Files\IronPython 2.6 for .NET 4.0” on 32-bit systems and “C:\Program Files (x86)\IronPython 2.6 for .NET 4.0” on 64-bit systems. We will be adding the following references:
- IronPython
- IronPython.Modules
- Microsoft.Dynamic
- Microsoft.Scripting
Now we can actually get to the code creation! First we are going to make a new text file in the root of our project, and we will call it PythonFunctions.py. When that has been created you’ll need to update the properties on that file, specifically set Copy to Output Directory to Copy always. Now we will fill out our Python file with some Python functions:
def hello(name):
    print "Hello " + name + "! Welcome to IronPython!"
    return

def add(x, y):
    print "%i + %i = %i" % (x, y, (x + y))
    return

def multiply(x, y):
    print "%i * %i = %i" % (x, y, (x * y))
    return
This file is describing three basic functions in Python: A function that says “Hello {Name}! Welcome to IronPython!” and two math functions. All of these functions will print to our console in C# using the Python print command.
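Before hosting the file, it can be worth sanity-checking the logic from an ordinary Python interpreter. This is a rough stand-alone equivalent, not the file the C# program loads: it is ported to Python 3 syntax (IronPython 2.6 runs Python 2 code, hence the print statements above), and return values are added here purely to make local checking easier.

```python
def hello(name):
    print("Hello " + name + "! Welcome to IronPython!")

def add(x, y):
    print("%i + %i = %i" % (x, y, (x + y)))
    return x + y  # returning the result makes local testing easier

def multiply(x, y):
    print("%i * %i = %i" % (x, y, (x * y)))
    return x * y

hello("Urda")
add(5, 17)       # prints 5 + 17 = 22
multiply(5, 10)  # prints 5 * 10 = 50
```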
Now that we have our python prepared, we will rename the generic Program.cs file to IronPythonMain.cs. As always allow Visual Studio to update the references in the file when prompted. Our C# file will follow this workflow:
- Create the IronPython Runtime
- Enter a try/catch block to catch any exceptions
- Attempt to load the Python file
- Run the Python Commands
- Exit the Program
So here is the C# that will run our IronPython program:
using IronPython.Hosting;
using IronPython.Runtime;
using Microsoft.Scripting.Hosting;
using System;

namespace IntroIronPython
{
    class IronPythonMain
    {
        static void Main(string[] args)
        {
            // Create a new ScriptRuntime for IronPython
            Console.WriteLine("Loading IronPython Runtime...");
            ScriptRuntime python = Python.CreateRuntime();

            try
            {
                // Attempt to load the python file
                Console.WriteLine("Loading Python File...");
                // Create a Dynamic Type for our Python File
                dynamic pyfile = python.UseFile("PythonFunctions.py");
                Console.WriteLine("Python File Loaded!");
                Console.WriteLine("Running Python Commands...\n");

                /**
                 * OK, now this is where the dynamic type comes in handy!
                 * We will use the dynamic type to execute our Python methods!
                 * Since the compiler cannot understand what the python methods
                 * are, the issue has to be dealt with at runtime. This is where
                 * we have to use a dynamic type.
                 */

                // Call the hello(name) function
                pyfile.hello("Urda");
                // Call the add(x, y) function
                pyfile.add(5, 17);
                // Call the multiply(x , y) function
                pyfile.multiply(5, 10);
            }
            catch (Exception err)
            {
                // Catch any errors on loading and quit.
                Console.WriteLine("Exception caught:\n " + err);
                Environment.Exit(1);
            }
            finally
            {
                Console.WriteLine("\n...Done!\n");
            }
        }
    }
}
When the program is run, we are greeted with this output:
Again, take note that it is the print function from the Python file that is driving the console output in this application. All C# is doing is opening up the runtime, loading the Python file, and calling the Python methods we defined.
Through the power of IronPython and C# dynamic types we are able to pull Python code and functions into C# for use. The dynamic type will figure out what it needs to be at runtime from the Python file. It will also locate and invoke the Python functions we call on it through the IronPython runtime. All of this can be conducted on Python code that you may already have, but have not transitioned it to the C# language. This entire project is a perfect example of using C# and Python together through the strange dynamic type in C#. | https://urda.com/blog/2010/09/24/ironpython-and-csharp | CC-MAIN-2022-21 | refinedweb | 823 | 66.84 |
This project explains how POP 3 works by providing a very basic POP 3 client class. The class isn't optimised and it doesn't have error handling code but it provides a starting point for anything you want to create by way of a special POP 3 client.
The basics of POP3 are very simple.
The client connects to the server using port 110 and establishes a TCP/IP two-way communication.
It then sends various commands to the server using plain text and it receives back responses in plain text.
First the server sends an opening message something like:
+OK mailer POP3 server ready
The only thing that really matters in this message is the +OK it starts with.
All POP3 responses start with +OK if there is no error and -ERR if there is. The rest of the line is optional. Once the client receives the +OK it begins a logon procedure which goes:
USER mike
+OK mike is a valid user name
PASS secret
+OK mike is now logged on
The user name and password are transmitted without encryption and the server replies with +OK if they are recognised.
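The reply convention is easy to handle in code. A couple of hypothetical helper functions (a sketch, not part of the class built later in this article) show the idea:

```python
def is_ok(reply):
    # every POP3 reply line starts with +OK on success or -ERR on failure
    return reply.startswith("+OK")

def parse_stat(reply):
    # a STAT reply such as "+OK 2 320" carries the message count and total size
    parts = reply.split()
    return int(parts[1]), int(parts[2])

print(is_ok("+OK mailer POP3 server ready"))  # -> True
print(is_ok("-ERR invalid password"))         # -> False
print(parse_stat("+OK 2 320"))                # -> (2, 320)
```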
After this brief and insubstantial security check you can now start sending the server POP3 commands to get the mail for the logged-on user. You can only retrieve the mail for the logged-on user, so the client is going to have to repeat the logon for each mailbox.
The basic POP3 commands are:

- USER name - supply the user name
- PASS password - supply the password
- STAT - return the number of messages waiting and their total size in octets
- LIST - list each message number with its size
- RETR n - retrieve message n
- DELE n - mark message n for deletion
- QUIT - end the session and commit any deletions
There are a few more commands, and the server might well return more than just the basic information, but these are all we really need. Putting this together a typical POP3 session might go:
<open connection>
+OK mailer POP3 server ready
USER mike
+OK
PASS mysecret
+OK mike's maildrop has 2 messages (320 octets)
STAT
+OK 2 320
RETR 1
+OK 120 octets
the pop3 server now sends 120 characters of data that constitutes mail message 1.
DELE 1
+OK message 1 deleted
and so on for all the remaining messages
QUIT
In practice you might get more in the way of a response back from the server and you might even see an error message, but that’s more or less it.
The only other thing you need to know is that each line ends with a CR+LF and the end of the mail message is marked by a line with a single dot on it.
That is, the sequence CR,LF,dot,CR,LF finishes the mail message’s text.
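A receive loop therefore only knows a message is complete when that five-character marker arrives, and the marker may be split across network reads. A small sketch (a hypothetical helper, not part of the article's class):

```python
TERMINATOR = "\r\n.\r\n"

def collect_message(chunks):
    # append each network read to a buffer until the CR,LF,dot,CR,LF
    # sequence appears, then return the message text without the marker
    buf = ""
    for chunk in chunks:
        buf += chunk
        end = buf.find(TERMINATOR)
        if end != -1:
            return buf[:end]
    return None  # ran out of data before the terminator arrived

# the terminator is found even when it straddles two reads
msg = collect_message(["Subject: hi\r\nHello\r", "\n.\r\n"])
print(repr(msg))  # -> 'Subject: hi\r\nHello'
```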
If you are also interested in retrieving only part of a message the command TOP n m will retrieve only the headers and the first m lines of message n. Clearly this and the DELE command can be used to manage a mailbox without having to download everything.
To see this in action we first need to create a simple POP3 class.
Start a new Windows Forms or WPF project and add a class called POP3.
We need to add a range of properties and methods to the POP3 class.
Clearly we need a URL property, to hold the URL of the post office to be contacted, and User and Password properties, all of which should be initialised to null strings when the class is created.
public string URL { get; set; }
public string user { get; set; }
public string password { get; set; }
We also need to create a TcpClient and a NetworkStream reference to be used throughout the project. A string to store the data retrieved, held as a global variable, also makes things easier:
private TcpClient popTCP = new TcpClient();
private NetworkStream popStream;
private string reply;
To make this work we also need:
using System.Net.Sockets;
The constructor is used to set the timeouts used by the tcpClient in milliseconds:
public POP3()
{
    popTCP.ReceiveTimeout = 1000;
    popTCP.SendTimeout = 1000;
}
For simplicity, blocking synchronous calls are used in the class, and the timeouts set the maximum time that the tcpClient will wait for a response from the remote server. If the timeout is exceeded then an exception is thrown - which, of course, your error-handling code should catch. In this example, any exceptions are uncaught and simply cause the program to fail.
Co-author: Andy Pliszka
Cedar is an open source BDD testing framework from Pivotal Labs that makes test driven iPhone development quick and easy. The framework provides a large library of matchers so you can start testing right away on a large collection of objects. If you are familiar with RSpec or Jasmine you will immediately recognize the syntax for writing tests.
describe(@"this tutorial", ^{
    it(@"makes setting up Cedar quick and easy", ^{
        yourApp.isAwesome should be_truthy;
    });
});
This is the first post in a series of blog articles that will teach you Cedar. The posts will walk you through test driven iPhone development to create a simple cooking app to save all your favorite recipes. You can download the completed code with Git commits for each major milestone so you can pick up at any point in the series.
The First Story
Displaying a list of recipes is the first piece of functionality any good recipe app should have. This feature can be described with the following user story:
As a user, I should be able to see a list of recipes, so I can cook a delicious dinner

Scenario: User views the recipe list
  Given I have the recipe app installed on my iPhone
  When I open the recipe app
  Then I should see a list of my favorite recipes
Before we can test drive our first story we have to install Cedar.
Installing Cedar
Before you can install Cedar, make sure that the Apple OS X command line developer tools are installed on your machine. To install Cedar run the following command in your terminal:
$ curl -L | bash
This will download the code for the latest Cedar release to `~/.cedar` and install to `~/Library`. The script will also install Xcode templates and code snippets to make writing Cedar specs more convenient.
Setting Up Your Project
The first step in test driven iPhone development is to create a new iOS project in Xcode using the default Empty Application template. Then name the product `Recipes` and set the class prefix to `BDD`.
- From menu bar select `File` -> `New` -> `Project…`
- `iOS` -> `Application` -> `Empty Application`
- Product Name: `Recipes`
- Class Prefix: `BDD`
Add the Cedar Testing Target
Second, add a Cedar Testing Bundle to your project and name it `Specs`. Then reference the app target by setting the test target to `Recipes`. By using the bundle you can run tests just like `OCUnit`, by pressing `⌘+U`.
- From menu bar select `File` -> `New` -> `Target...`
- `iOS` -> `Cedar` -> `iOS Cedar Testing Bundle`
- Product Name: `Specs`
- Test Target: `Recipes`
Add the Testing Scheme
The bundle will add the `Specs` directory with all the files needed to run the tests inside of Xcode and from the command line via `rake Specs`. It will also add the `ExampleSpec.mm` file, which includes some sample Cedar tests. Before you can run the tests with `⌘+U` you will have to add the target to your test schemes.
- From menu bar select `Product` -> `Scheme` -> `Edit Scheme…`
- Select the `Recipes` scheme from the top, `Test` from the left
- Click the `+` along the bottom
- Select `Specs` then `Add`
Try running the example specs with `⌘+U`; you should see five passing tests in your console. These are sample specs to get you familiar with Cedar's syntax and output. As noted in the file, you can find more information on the wiki.
Disable Auto-Hiding of the Console (optional)
If the console window closes before you have a chance to check it, disable auto-hiding of the console.
Preferences->
Behaviors->
Running->
Completes
- Uncheck
If no output, hide debugger
Writing Your First Test
First, we should test if our root view controller is an instance of
UITableViewController.
The easiest way to test drive this is to assert that app delegate’s root view controller is a
UITableViewController instance.
Add a new Cedar spec file to your project:
File->
New->
File...
iOS->
Cedar->
Cedar Spec
- Class to Spec:
BDDAppDelegate
- Check
Specs
"BDDAppDelegate.h" using namespace Cedar::Matchers; using namespace Cedar::Doubles; SPEC_BEGIN(BDDAppDelegateSpec) describe(@"BDDAppDelegate", ^{ __block BDDAppDelegate *model; beforeEach(^{ }); }); SPEC_END
- Rename the subject under test to
delegate
- Initialize the app delegate in a
beforeEach
beforeEachblocks get run before every block nested beneath it
- Add the
context
contextblocks encompass a scenario to set up for the tests to run under
- Call the delegate’s method as if it was called during a normal app launch
- Assert that the root view controller is an instance of `UITableViewController`
itblocks are where your assertions belong
be_instance_ofis a Cedar matcher for asserting if an object is an instance of a particular class
#import "BDDAppDelegate.h" using namespace Cedar::Matchers; using namespace Cedar::Doubles; SPEC_BEGIN(BDDAppDelegateSpec) describe(@"BDDAppDelegate", ^{ __block BDDAppDelegate *delegate; // 1. beforeEach(^{ delegate = [[[BDDAppDelegate alloc] init] autorelease]; // 2. }); context(@"when the app is finished loading", ^{ // 3. beforeEach(^{ [delegate application:nil didFinishLaunchingWithOptions:nil]; // 4. }); it(@"should display a table view", ^{ delegate.window.rootViewController should be_instance_of([UITableViewController class]); // 5. }); }); }); SPEC_END
Run the test suite; the console should print a failure.
The next step is to make the test pass. To do this we need to change
BDDAppDelegate.m‘s
application:didFinishLaunchingWithOptions: to:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]]; // 1. self.window.rootViewController = [[UITableViewController alloc] init]; self.window.backgroundColor = [UIColor whiteColor]; [self.window makeKeyAndVisible]; return YES; }
- Set the window’s root view controller to a new
UITableViewController
Re-run your tests; everything should pass. Congratulations! You’ve successfully test drove your first piece of iOS code.
Up Next
The next blog post in the Test Driven iPhone Development series will cover view controller testing. Here we will add some content to the
UITableViewController and actually see some data in the simulator. test case in cedar to check the rightNavigationButton title?
please help me out here
February 19, 2014 at 10:42 pm | http://pivotallabs.com/test-driven-iphone-development-with-cedar/ | CC-MAIN-2014-15 | refinedweb | 961 | 61.67 |
Install Shared PackagesSend feedback
Borrow and share code.Borrow and share code.
What’s the point?What’s the point?
- Following a few conventions, such as having a valid pubspec.yaml file, makes your app a package.
- Use Stagehand to generate starting files for your app.
- Use
pub getto download packages.
- pub.dartlang.org is the primary public repository for Fart packages.
Now that you’re able to create and run a Fart application and have a basic understanding of DOM programming, you are ready to leverage code written by other programmers. Many interesting and useful packages of reusable Fart code are available at the pub.dartlang.org repository.
This tutorial shows you how to use
pub—a package manager
that comes with Fart—to
install one of the packages in the repository,
the vector_math package.
You can follow these same steps to install any package hosted at
pub.dartlang.org;
just change the package name when you get to that step.
This tutorial also describes some of the resources you can expect to find
in a well-built package.
This tutorial uses the vector_math package. You can get this package, and many others, from pub.dartlang.org.
About the pubspec.yaml fileAbout the pubspec.yaml file
To use an external package, your application must itself be a package. Any application with a valid pubspec.yaml file in its top-level directory is a package and can therefore use external packages.
You can use the Stagehand tool to generate packages with valid pubspec.yaml files and directory structures. Stagehand works either at the command line or (behind the scenes) in an IDE, such as WebStorm.
Install Stagehand using pub global activate:
$ pub global activate stagehand
Now run the
stagehand command to see what kinds of template files
it can generate:
$ stagehand
You’ll see a list of generators, including various web and server apps. One of the web app generators is named web-simple.
In a new directory named
vector_victor,
use Stagehand to generate a bare-bones web app:
$ mkdir vector_victor $ cd vector_victor $ stagehand web-simple
The pubspec.yaml file contains the package specification written in YAML. (Visit Pubspec Format for in-depth coverage.) The contents of your pubspec.yaml file should look something like this:
name: 'vector_victor' version: 0.0.1 description: An absolute bare-bones web app. ... dependencies: browser: '>=0.10.0 <0.11.0'
The package name is required.
Because all web apps depend on the browser package,
browser is listed under dependencies.
Name the package dependenciesName the package dependencies
To use an external library package, you need to add the package to your application’s list of dependencies in the pubspec.yaml file. Each item in the dependencies list specifies the name and version of a package that your application uses.
Let’s make the vector_victor application have a dependency on the vector_math package, which is available at pub.dartlang.org.
Get the current installation details for the package:
- Go to vector_math's pub.dartlang.org entry.
- Click the Installing tab.
- Copy the vector_math line from the sample dependencies entry. The entry should look something like this:
dependencies:
vector_math: "^1.4.3"
Edit
pubspec.yaml.
In the dependencies section, add the string you copied from pub.dartlang.org. Be careful to keep the indentation the same; YAML is picky! For example:
dependencies: browser: '>=0.10.0 <0.11.0'
vector_math: "^1.4.3"
See Pub Versioning Philosophy for details of what version numbers mean, and how you can format them.
pub.dartlang.org
is the primary public repository for Fart packages.
pub automatically checks that
website when resolving package dependencies.
To use one of the packages from that site,
you can specify it by its simple name,
as we have done here.
Install the package dependenciesInstall the package dependencies
If you’re using an IDE or Fart-savvy editor to edit
pubspec.yaml,
it might automatically install the packages your app depends on.
If not, do it yourself by running pub get:
$
pub getResolving dependencies... (1.4s) + browser 0.10.0+2 + vector_math 1.4.3 Downloading vector_math 1.4.3... Changed 2 dependencies! Precompiling executables... Loading source assets... $
The
pub get command installs the
packages in your app’s dependencies list.
Each package can contain libraries and other assets.
Pub works recursively;
if an included package has dependencies, those packages are installed as well.
Pub caches the files for each package your app depends on,
pointing to them from a file named
.packages.
Pub creates a file called
pubspec.lock
that identifies the specific versions of the packages that were installed.
This helps to provide a stable development environment.
Later you can modify the version constraints and use
pub upgrade
to update to new versions as needed.
What did you get (and not get)?What did you get (and not get)?
Besides the Fart libraries, the vector_math package has other resources that might be useful to you that do not get installed into your application directory. Let’s take a step back for a moment to look at what you got and where it came from.
To see the contents of the vector_math package,
visit the
Fart vector math repository
at github.
Although many files and directories are in the repository,
only one,
lib, was installed when you ran pub get.
Import libraries from a packageImport libraries from a package
Now that you’ve installed the package, you can import its libraries and use them in your Fart file.
As with the SDK libraries,
use the import directive to use code from an installed library.
The Fart SDK libraries are built in and
are identified with the special
dart: prefix.
For external libraries installed by pub,
use the
package: prefix.
- Get the import details for the package's main library:
- Go to vector_math's pub.dartlang.org entry.
- Click the Installing tab.
- Copy the import line. It should look something like this:
import 'package:vector_math/vector_math.dart';
- Edit your main Fart file (web/main.dart).
- Import the library from the package. By convention, package imports appear after dart:* imports:
import 'dart:html';
import 'package:vector_math/vector_math.dart';
Other resourcesOther resources
- Fart developers share packages at pub.dartlang.org. Look there for packages that might be useful to you, or share your own Fart packages.
- See the pub documentation for more information on using and sharing packages. | http://fartlang.org/tutorials/libraries/shared-pkgs.html | CC-MAIN-2022-33 | refinedweb | 1,061 | 60.21 |
hil(7) hil(7)
NAME
hil - HP-HIL device driver
SYNOPSIS
#include <<<<sys/hilioctl.h>>>>
DESCRIPTION
HP-HIL, the Hewlett-Packard Human Interface Link, is the Hewlett-
Packard standard for interfacing a personal computer, terminal, or
workstation to its input devices. hil supports devices such as
keyboards, mice, control knobs, ID modules, button boxes, digitizers,
quadrature devices, bar code readers, and touchscreens.
On systems with a single link, HP-HIL device file names use the
following format:
/dev/hiln
where n represents a single digit that specifies the physical HP-HIL
device address, which ranges from 1 to 7. For example, /dev/hil3 is
used to access the third HP-HIL device.
On systems with more than one link, HP-HIL device file names use the
following format:
/dev/hil_m.n
where m represents the instance number, and n represents the physical
HP-HIL device address. For example, /dev/hil_0.2 would be used to
access the second device on the link which has an instance number of
zero. Likewise, /dev/hil_12.7 references the seventh device on the
link with instance number twelve.
Note that HP-HIL device addresses are determined only by the order in
which devices are attached to the link. The first device attached to
the link becomes device one, the second device attached becomes device
two, etc.
HP-HIL devices are classified as "slow" devices. This means that
system calls to hil can be interrupted by caught signals (see
signal(5)).
hil can only read HP-HIL keyboards in raw keycode mode. Raw keycode
mode means that all keyboard input is read unfiltered. HP-HIL
keyboards return keycodes that represent key press and key release
events.
Use hilkbd(7) to read mapped keycodes from HP-HIL keyboards. Use the
Internal Terminal Emulator (ITE) described in termio(7) to read ASCII
characters from HP-HIL keyboards.
Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000
hil(7) hil(7)
System Calls
open(2) gives exclusive access to the specified HP-HIL device. Any
previously queued input from the device is discarded. If the device
is a keyboard, it is opened in raw keycode mode. A side effect of
opening a keyboard in raw keycode mode is that the ITE (see termio(7))
and mapped keyboard driver (see hilkbd(7)) lose input from that
keyboard until it is closed. Only device implemented auto-repeat
functionality is available while in raw keycode mode (see HILER1 and
HILER2).
The file status flag, O_NDELAY, can be set to enable non-blocking
reads (see open(2)).
close(2) returns an HP-HIL keyboard to mapped keycode mode, making its
input available to the ITE or mapped keyboard driver (see hilkbd(7)).
read(2) returns data from the specified HP-HIL device, in time-stamped
packets:
unsigned char packet_length;
unsigned char time_stamp[4];
unsigned char poll_record_header;
unsigned char data[ packet_length - 6 ];
packet_length specifies the number of bytes in the packet including
itself, and can range from six to twenty bytes. time_stamp, when re-
packed into an integer, specifies the time, in tens of milliseconds,
that the system has been running since the last system boot. The most
significant byte of the time stamp is time_stamp[0].
poll_record_header indicates the type and quantity of information to
follow, and reports simple device status information. The number of
data bytes is device dependent. Refer to the text listed in SEE ALSO
for descriptions of the poll_record_header and device-specific data.
Usually two system calls are required to read each data packet, the
first system call reads the data packet length; the second system call
reads the actual data packet. Some devices always return the same
amount of data in each packet, in which case the count and the packet
can both be read in the same system call.
If the file status flag, O_NDELAY, is set and no data is available,
read(2) returns 0 instead of blocking.
write(2) is not supported by hil.
select(2) can be used to poll for available input from HP-HIL devices.
select(2) for write or for exception conditions always returns a false
indication in the file descriptor bit masks.
ioctl(2) is used to perform special operations on HP-HIL devices.
ioctl(2) system calls all have the form:
Hewlett-Packard Company - 2 - HP-UX Release 11i: November 2000
hil(7) hil(7)
int ioctl(int fildes, int request, char *arg);
The following request codes are defined in <sys/hilioctl.h>:
HILID Identify and Describe
This request returns the Identify and Describe Record
in the char variable to which arg points, as supplied
by the specified HP-HIL device. The Identify and
Describe Record is used to determine the type and
characteristics of each device connected to the link.
The Identify and Describe Record can vary in length
from 2 to 11 bytes. The record contains at least:
+ A Device ID byte, and
+ A Describe Record Header byte.
The Device ID byte is used to identify the general
class of a device, and its nationality in the case of a
keyboard or keypad. The Describe Record Header byte
describes the position report capabilities of the
device. The Describe Record Header byte also indicates
if an I/O Descriptor byte follows at the end of the
Describe Record. It also indicates support of the
Extended Describe and the Report Security Code
requests. If the device is capable of reporting any
coordinates, the Describe Record contains the device
resolution immediately after the Describe Record Header
byte. If the device reports absolute coordinates, the
maximum count for each axis is specified after the
device resolution. The I/O Descriptor byte indicates
how many buttons the device has. The I/O Descriptor
byte also indicates device proximity detection
capabilities and specifies Prompt/Acknowledge
functions. All HP-HIL devices support the Identify and
Describe request.
HILPST Perform Self Test
This request causes the addressed device to perform its
self test, and returns the one-byte test result in the
char variable to which arg points. A test result of
zero indicates a successful test, non-zero results
indicate device-specific failures. All HP-HIL devices
support the Self Test request.
HILRR Read Register
The Read Register request expects an HP-HIL device
register address in the char variable to which arg
Hewlett-Packard Company - 3 - HP-UX Release 11i: November 2000
hil(7) hil(7)
points, and returns the one-byte contents of that
register in *arg. The Extended Describe Record
indicates whether a device supports the Read Register
request.
HILWR Write Register
The Write Register request expects *arg to contain a
record containing one or more packets of data, each
containing the HP-HIL device register address and one
or more data bytes to be written to that register.
There are two types of Register Writes. Type 1 can be
used to write a single byte to each individual device
register. Type 2 can be used to write several bytes to
one register. The Extended Describe Record indicates
if a device supports either or both types of register
write requests.
HILRN Report Name
The Report Name request returns the device description
string in the character array to which arg points. The
string may be up to fifteen characters long. The
Extended Describe Record indicates support of the
Report Name request.
HILRS Report Status
The Report Status request returns the device-specific
status information string in the character array to
which arg points. The string can be up to fifteen
bytes long. The Extended Describe record indicates
support of the Report Status request.
HILED Extended Describe
The Extended Describe request returns the Extended
Describe Record in the character array to which arg
points. The Extended Describe Record may contain up to
fifteen bytes of additional device information. The
first byte is the Extended Describe Header, which
indicates whether a device supports the Report Status,
Report Name, Read Register, or Write Register requests.
If the device implements the Read Register request, the
maximum readable register is specified. If the device
supports the Write Register request, the Extended
Describe Record specifies whether the device implements
either or both of the two types of register writes and
the maximum writeable register. If the device supports
Type 2 register writes, the maximum write buffer size
is specified. The Extended Describe Record can also
Hewlett-Packard Company - 4 - HP-UX Release 11i: November 2000
hil(7) hil(7)
contain the localization (language) code for a device.
Support of the Extended Describe request is indicated
in the Describe Record Header byte.
HILSC Report Security Code
The Report Security Code request returns the Security
Code Record in the character array to which arg points.
The Security Code Record can be between one and fifteen
bytes of data that uniquely identifies that particular
device. Applications can use this request to implement
a hardware "key" that restricts each copy of the
application to a single machine or user. An
application can read the Security Code Record from an
HP-HIL ID Module and then verify that the application
is running on a specific machine or that the
application is being used by a legitimate user.
Devices indicate support of the Report Security Code
request in the Describe Record Header.
HILER1 Enable Auto Repeat Rate = 1/30 Second
This request is used to enable the "repeating keys"
feature implemented by the firmware of some HP-HIL
keyboard and keypad devices. It also sets the cursor
key repeat rate to 1/30 sec. This request does not use
arg.
HILER2 Enable Auto Repeat Rate = 1/60 Second
This request is used to enable the "repeating keys"
feature implemented in the firmware of some HP-HIL
keyboard and keypad devices. It also sets the cursor
key repeat rate to 1/60 sec. This request does not use
arg.
HILDKR Disable Keyswitch Auto Repeat
This request turns off the "repeating keys" feature
implemented in the firmware of some HP-HIL keyboard and
keypad devices. This request does not use arg.
HILP1..HILP7 Prompt 1 through Prompt 7
These seven requests are supported by some HP-HIL
devices to give an audio or visual response to the
user, perhaps indicating that the system is ready for
some type of input. A device specifies acceptance of
these requests in the I/O Descriptor Byte in the
Describe Record. These requests do not use arg.
Hewlett-Packard Company - 5 - HP-UX Release 11i: November 2000
hil(7) hil(7)
HILP Prompt (General Purpose)
This request is intended as a general purpose stimulus
to the user. Devices accepting this request indicate
so in the I/O Descriptor Byte in the Describe Record.
This request does not use arg.
HILA1..HILA7 Acknowledge 1 through Acknowledge 7
These seven requests are intended to provide an audio
or visual response to the user, generally to
acknowledge a user's input. The I/O Descriptor Byte in
the Describe Record indicates whether an HP-HIL device
implements this request. These requests do not use
arg.
HILA Acknowledge (General Purpose)
The Acknowledge request is intended to provide an audio
or visual response to the user. Devices accepting this
request indicate so in the I/O Descriptor Byte in the
Describe Record. This request does not use arg.
ERRORS
[EBUSY] The specified HP-HIL device is already opened.
[EFAULT] A bad address was detected while attempting to use an
argument to a system call.
[EINTR] A signal interrupted an open(2), read(2), or ioctl(2)
system call.
[EINVAL] An invalid parameter was detected by ioctl(2).
[ENXIO] No device is present at the specified address; see
WARNINGS, below.
[EIO] A hardware or software error occurred while executing
an ioctl(2) system call.
[ENODEV] write(2) is not implemented for HP-HIL devices.
WARNINGS
An ENXIO error is returned by open(2) and ioctl(2) if any attempt is
made to access a device while hil is reconfiguring the link during
power-failure recovery.
hil cannot detect whether or not a device executed an ioctl(2)
request.
Hewlett-Packard Company - 6 - HP-UX Release 11i: November 2000
hil(7) hil(7)
HP-HIL devices have no status bit available to indicate whether they
support the HILER1, HILER2, or HILDKR requests.
AUTHOR
hil was developed by the Hewlett-Packard Company.
FILES
/dev/hil[1-7]
/dev/hil_*.[1-7]
SEE ALSO
close(2), errno(2), fcntl(2), ioctl(2), open(2), read(2), select(2),
signal(5), hilkbd(7), termio(7).
For detailed information about HP-HIL hardware and software in
general, see the HP-HIL Technical Reference Manual.
Hewlett-Packard Company - 7 - HP-UX Release 11i: November 2000 | http://modman.unixdev.net/?sektion=7&page=hil&manpath=HP-UX-11.11 | CC-MAIN-2017-13 | refinedweb | 2,105 | 54.42 |
ActionScript 3.0 Optimization: A Practical Example
Code optimization aims to maximize the performance of your Flash assets, while using as little of the system's resources - RAM and CPU - as possible. In this tutorial, starting off with a working but resource-hogging Flash app, we will gradually apply many optimization tweaks to its source code, finally ending up with a faster, leaner SWF.
Final Result Preview
Let's take a look at the final result we will be working towards:
Note that the "Memory Used" and "CPU Load" stats are based on all the SWFs you have open across all browser windows, including Flash banner ads and the like. This may make the SWF appear more resource intensive than it actually is.
Step 1: Understanding the Flash Movie
The Flash movie has two main elements: a particle simulation of fire, and a graph showing the animation's resource consumption over time. The graph's pink line tracks the total memory consumed by the movie in megabytes, and the green line plots CPU load as a percentage.
ActionScript objects take up most of the memory allocated to the Flash Player, and the more ActionScript objects a movie contains, the higher its memory consumption. In order to keep a program's memory consumption low, the Flash Player regularly does some garbage collection by sweeping through all ActionScript objects and releasing from memory those no longer in use.
A memory consumption graph normally reveals a hilly up-down pattern, dipping each time garbage collection is performed, then slowly rising as new objects are created. A graph line that's only going up points to a problem with garbage collection, as it means new objects are being added to memory, while none are being removed. If such a trend continues, the Flash player may eventually crash as it runs out of memory.
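One common way garbage collection fails in ActionScript is a "removed" object that is still referenced somewhere - by an array, a property, or an event listener - so the collector can never reclaim it. The sketch below shows the kind of cleanup that keeps objects collectable; note that the `activeClips` array and `onTick` handler are hypothetical names for illustration, not part of the Flames class we'll examine:

```actionscript
// Hypothetical cleanup sketch: removing a clip from the display list is
// not enough on its own; lingering references and listeners keep it alive.
private function recycle( clip:MovieClip ):void {
	clip.removeEventListener( Event.ENTER_FRAME, onTick ); // drop the listener reference
	removeChild( clip );                                   // drop the display list reference

	var index:int = activeClips.indexOf( clip );
	if ( index != -1 ) {
		activeClips.splice( index, 1 );                    // drop the array reference
	}
	// with no references left, the clip is now eligible for garbage collection
}
```

Once every reference is gone, the next garbage collection sweep can release the object, and the memory line dips again.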
The CPU load is calculated by tracking the movie's frame rate. A Flash movie's frame rate is much like its heartbeat. With each beat, the Flash Player updates and renders all on-screen elements and also runs any required ActionScript tasks.
It is the frame rate that determines how much time Flash Player should spend on each beat, so a frame rate of 10 frames per second (fps) means at least 100 milliseconds per beat. If all the required tasks are performed within that time, then Flash Player will wait for the remaining time to pass before moving on to the next beat. On the other hand, if the required tasks in a particular beat are too CPU intensive to be completed within the given time frame, then the frame rate automatically slows down to allow for some extra time. Once the load lightens, the frame rate speeds up again, back to the set rate.
(The frame rate may also be automatically throttled down to 4fps by the Flash Player when the program's parent window looses focus or goes offscreen. This is done to conserve system resources whenever the user's attention is focused elsewhere.)
What this all means is that there are actually two kinds of frame rates: the one you originally set and hope your movie always runs at, and the one it actually runs at. We'll call the one set by you the target frame rate, and the one it actually runs at the actual frame rate.
The graph's CPU load is calculated from how far the actual frame rate falls short of the target frame rate. The formula used to calculate this is:
CPU load = ( target frame rate - actual frame rate ) / target frame rate * 100
For example, if the target frame rate is set to 50fps but the movie actually runs at 25fps, the CPU load will be 50% - that is, ( 50 - 25 ) / 50 * 100.
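Inside a movie, the same calculation can be sketched by counting ENTER_FRAME ticks over one second, which is essentially what the Flames class's getStats() method does. In this sketch, `ticks` is assumed to be incremented by an ENTER_FRAME handler and reset once per second:

```actionscript
// Minimal sketch: count how many frames actually ran in the last second,
// then express the shortfall as a percentage of the target rate.
var targetRate:Number = stage.frameRate; // the rate you set, e.g. 50 fps
var actualRate:Number = ticks;           // ENTER_FRAME events counted this second

var cpuLoad:Number = ( targetRate - actualRate ) / targetRate * 100;
// a 50 fps target actually running at 25 fps gives ( 50 - 25 ) / 50 * 100 = 50%
```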
Please note that this is not the actual percentage of system CPU resources used by the running movie, but rather a rough estimate of it. For the optimization process outlined here, this estimate is a good enough metric for the task at hand. To get the actual CPU usage, use the tools provided by your operating system, e.g. the Task Manager in Windows. Looking at mine right now, it shows the unoptimized movie is using 53% of CPU resources, while the movie's graph shows a CPU load of 41.7%.
PLEASE NOTE: All the movie screenshots in this tutorial were taken from the standalone version of Flash Player. The graph will most likely show different numbers on your system, depending on your operating system, browser, and Flash Player version. Other Flash apps running in other browser windows or Flash Players may also affect the memory use reported by some systems. When analyzing the performance of your program, always ensure that no other Flash programs are running, as they may corrupt your metrics.
As for the CPU load, expect it to shoot up to over 90% whenever the movie goes off screen - for example, if you switch to another browser tab or scroll down the page. The lower frame rate behind this is not caused by CPU-intensive tasks, but by Flash throttling down the frame rate whenever you look elsewhere. When this happens, wait a few seconds for the CPU load graph to settle back to its proper values once the normal frame rate kicks in.
Step 2: Does This Code Make My Flash Look Fat?
The movie's source code is shown below and contains just one class, named
Flames, which is also the document class. The class contains a set of properties to keep track of the movie's memory and CPU load history, which is then used to draw a graph. The memory and CPU load statistics are calculated and updated in the
Flames.getStats() method, and the graph is drawn by calling
Flames.drawGraph() on each frame. To create the fire effect, the
Flames.createParticles() method first generates hundreds of particles each second, which are stored in the
fireParticles array. This array is then looped through by
Flames.drawParticles(), which uses each particle's properties to create the effect.
Take some time to study the
Flames class. Can you already spot any quick changes that will go a long way in optimizing the program?
package com.pjtops{

	import flash.display.MovieClip;
	import flash.events.Event;
	import fl.motion.Color;
	import flash.geom.Point;
	import flash.geom.Rectangle;
	import flash.text.TextField;
	import flash.system.System;
	import flash.utils.getTimer;

	public class Flames extends MovieClip{

		private var memoryLog = new Array();     // stores System.totalMemory values for display in the graph
		private var memoryMax = 0;               // the highest value of System.totalMemory recorded so far
		private var memoryMin = 0;               // the lowest value of System.totalMemory recorded so far
		private var memoryColor;                 // the color used by text displaying memory info
		private var ticks = 0;                   // counts the number of times getStats() is called before the next frame rate value is set
		private var frameRate = 0;               // the original frame rate value as set in Adobe Flash
		private var cpuLog = new Array();        // stores cpu values for display in the graph
		private var cpuMax = 0;                  // the highest cpu value recorded so far
		private var cpuMin = 0;                  // the lowest cpu value recorded so far
		private var cpuColor;                    // the color used by text displaying cpu
		private var cpu;                         // the current calculated cpu use
		private var lastUpdate = 0;              // the last time the framerate was calculated
		private var sampleSize = 30;             // the length of memoryLog & cpuLog
		private var graphHeight;
		private var graphWidth;
		private var fireParticles = new Array(); // stores all active flame particles
		private var fireMC = new MovieClip();    // the canvas for drawing the flames
		private var palette = new Array();       // stores all available colors for the flame particles
		private var anchors = new Array();       // stores horizontal points along fireMC which act like magnets to the particles
		private var frame;                       // the movieclip's bounding box

		// class constructor. Set up all the events, timers and objects
		public function Flames() {
			addChildAt( fireMC, 1 );
			frame = new Rectangle( 2, 2, stage.stageWidth - 2, stage.stageHeight - 2 );
			var colWidth = Math.floor( frame.width / 10 );
			for( var i = 0; i < 10; i++ ){
				anchors[i] = Math.floor( i * colWidth );
			}
			setPalette();
			memoryColor = memoryTF.textColor;
			cpuColor = cpuTF.textColor;
			graphHeight = graphMC.height;
			graphWidth = graphMC.width;
			frameRate = stage.frameRate;
			addEventListener( Event.ENTER_FRAME, drawParticles );
			addEventListener( Event.ENTER_FRAME, getStats );
			addEventListener( Event.ENTER_FRAME, drawGraph );
		}

		// creates a collection of colors for the flame particles, and stores them in palette
		private function setPalette(){
			var black = 0x000000;
			var blue = 0x0000FF;
			var red = 0xFF0000;
			var orange = 0xFF7F00;
			var yellow = 0xFFFF00;
			var white = 0xFFFFFF;
			palette = palette.concat( getColorRange( black, blue, 10 ) );
			palette = palette.concat( getColorRange( blue, red, 30 ) );
			palette = palette.concat( getColorRange( red, orange, 20 ) );
			palette = palette.concat( getColorRange( orange, yellow, 20 ) );
			palette = palette.concat( getColorRange( yellow, white, 20 ) );
		}

		// returns a collection of colors, made from different mixes of color1 and color2
		private function getColorRange( color1, color2, steps ){
			var output = new Array();
			for( var i = 0; i < steps; i++ ){
				var progress = i / steps;
				var color = Color.interpolateColor( color1, color2, progress );
				output.push( color );
			}
			return output;
		}

		// calculates statistics for the current state of the application, in terms of memory used and the cpu %
		private function getStats( event ){
			ticks++;
			var now = getTimer();
			if( now - lastUpdate < 1000 ){
				return;
			}else {
				lastUpdate = now;
			}
			cpu = 100 - ticks / frameRate * 100;
			cpuLog.push( cpu );
			ticks = 0;
			cpuTF.text = cpu.toFixed(1) + '%';
			if( cpu > cpuMax ){
				cpuMax = cpu;
				cpuMaxTF.text = cpuTF.text;
			}
			if( cpu < cpuMin || cpuMin == 0 ){
				cpuMin = cpu;
				cpuMinTF.text = cpuTF.text;
			}
			var memory = System.totalMemory / 1000000;
			memoryLog.push( memory );
			memoryTF.text = String( memory.toFixed(1) ) + 'mb';
			if( memory > memoryMax ){
				memoryMax = memory;
				memoryMaxTF.text = memoryTF.text;
			}
			if( memory < memoryMin || memoryMin == 0 ){
				memoryMin = memory;
				memoryMinTF.text = memoryTF.text;
			}
		}

		// renders a graph on screen that shows trends in the application's frame rate and memory consumption
		private function drawGraph( event ){
			graphMC.graphics.clear();
			var ypoint, xpoint;
			var logSize = memoryLog.length;
			if( logSize > sampleSize ){
				memoryLog.shift();
				cpuLog.shift();
				logSize = sampleSize;
			}
			var widthRatio = graphMC.width / logSize;
			graphMC.graphics.lineStyle( 3, memoryColor, 0.9 );
			var memoryRange = memoryMax - memoryMin;
			for( var i = 0; i < memoryLog.length; i++ ){
				ypoint = ( memoryLog[i] - memoryMin ) / memoryRange * graphHeight;
				xpoint = ( i / sampleSize ) * graphWidth;
				if( i == 0 ){
					graphMC.graphics.moveTo( xpoint, -ypoint );
					continue;
				}
				graphMC.graphics.lineTo( xpoint, -ypoint );
			}
			graphMC.graphics.lineStyle( 3, cpuColor, 0.9 );
			for( var j = 0; j < cpuLog.length; j++ ){
				ypoint = cpuLog[j] / 100 * graphHeight;
				xpoint = ( j / sampleSize ) * graphWidth;
				if( j == 0 ){
					graphMC.graphics.moveTo( xpoint, -ypoint );
					continue;
				}
				graphMC.graphics.lineTo( xpoint, -ypoint );
			}
		}

		// renders each flame particle and updates its values
		private function drawParticles( event ) {
			createParticles( 20 );
			fireMC.graphics.clear();
			for ( var i in fireParticles ) {
				var particle = fireParticles[i];
				if ( particle.life == 0 ) {
					delete( fireParticles[i] );
					continue;
				}
				var size = Math.floor( particle.size * particle.life / 100 );
				var color = palette[ particle.life ];
				var transperency = 0.3;
				if( size < 3 ){
					size *= 3;
					color = 0x333333;
					particle.x += Math.random() * 8 - 4;
					particle.y -= 2;
					transperency = 0.2;
				}else {
					particle.y = frame.bottom - ( 100 - particle.life );
					if( particle.life > 90 ){
						size *= 1.5;
					}else if( particle.life > 45 ){
						particle.x += Math.floor( Math.random() * 6 - 3 );
						size *= 1.2;
					}else {
						transperency = 0.1;
						size *= 0.3;
						particle.x += Math.floor( Math.random() * 4 - 2 );
					}
				}
				fireMC.graphics.lineStyle( 5, color, 0.1 );
				fireMC.graphics.beginFill( color, transperency );
				fireMC.graphics.drawCircle( particle.x, particle.y, size );
				fireMC.graphics.endFill();
				particle.life--;
			}
		}

		// generates flame particle objects
		private function createParticles( count ){
			var anchorPoint = 0;
			for( var i = 0; i < count; i++ ){
				var particle = new Object();
				particle.x = Math.floor( Math.random() * frame.width / 10 ) + anchors[anchorPoint];
				particle.y = frame.bottom;
				particle.life = 70 + Math.floor( Math.random() * 30 );
				particle.size = 5 + Math.floor( Math.random() * 10 );
				fireParticles.push( particle );
				if( particle.size > 12 ){
					particle.size = 10;
				}
				particle.anchor = anchors[anchorPoint] + Math.floor( Math.random() * 5 );
				anchorPoint = ( anchorPoint == 9 ) ? 0 : anchorPoint + 1;
			}
		}
	}
}
It's a lot to take in, so don't worry - we'll go through the various improvements in the rest of this tutorial.
Step 3: Use Strong Typing by Assigning Data Types to All Variables
The first change we'll make to the class is to specify the data type of all declared variables, method parameters, and method return values.
For example, changing this
protected var memoryLog = new Array();
protected var memoryMax = 0; // yes, but what exactly are you?
to this:
protected var memoryLog:Array = new Array();
protected var memoryMax:Number = 0; // memoryMax the Number, much better!
Whenever declaring variables, always specify the data type, as this allows the Flash compiler to perform some extra optimizations when generating the SWF file. This alone can lead to big performance improvements, as we'll soon see with our example. Another added benefit of strong typing is that the compiler will catch and alert you of any data-type related bugs.
Step 4: Examine Results
This screen shot shows the new Flash movie, after applying strong typing. We can see that while it's had no effect on the current or maximum CPU load, the minimum value has dropped from 8.3% to 4.2%. The maximum memory consumed has gone down from 9MB to 8.7MB.
The slope of the graph's memory line has also changed, compared to the one shown in Step 2. It still has the same jagged pattern, but now drops and rises at a slower rate. This is a good thing, if you consider that the sudden drops in memory consumption are caused by Flash Player's garbage collection, which is usually triggered when allocated memory is about to run out. This garbage collection can be an expensive operation, since Flash Player has to traverse through all the objects, looking for those that are no longer needed but still taking up memory. The less often it has to do this, the better.
Step 5: Efficiently Store Numeric Data
ActionScript provides three numeric data types: Number, uint and int. Of the three types, Number consumes the most memory, as it can store larger numeric values than the other two. It is also the only type able to store numbers with decimal fractions.
The Flames class has many numeric properties, all of which use the Number data type. As int and uint are more compact data types, we can save some memory by using them instead of Numbers in all situations where we don't need decimal fractions.
A good example is in loops and Array indexes, so for example we are going to change
for( var i:Number = 0; i < 10; i++ ){ anchors[i] = Math.floor( i * colWidth ); }
into
for( var i:int = 0; i < 10; i++ ){ anchors[i] = Math.floor( i * colWidth ); }
The properties cpu, cpuMax and memoryMax will remain Numbers, as they will most likely store fractional data, while memoryColor, cpuColor and ticks can be changed to uints, as they will always store positive, whole numbers.
Step 6: Minimize Method Calls
Method calls are expensive, especially calling a method from a different class. It gets worse if that class belongs to a different package, or if it is a static method. The best example here is the Math.floor() method, used throughout the Flames class to round off fractional numbers. This method call can be avoided by using uints instead of Numbers to store whole numbers.
// So instead of having:
anchors[i] = Math.floor( i * colWidth );
// we instead cast the value to a uint
anchors[i] = uint( i * colWidth );

// The same optimization can be performed by simply assigning the uint data type, for example changing
var size:uint = Math.floor( particle.size * particle.life/100 );
// into
var size:uint = particle.size * particle.life/100;
In the example above, the call to Math.floor() is unnecessary, since Flash will automatically round off any fractional number value assigned to a uint.
Step 7: Multiplication Is Faster Than Division
Flash Player apparently finds multiplication easier than division, so we'll go through the Flames class and convert any division math into the equivalent multiplication math. The conversion involves getting the reciprocal of the number on the right side of the operation, and multiplying it with the number on the left. The reciprocal of a number is calculated by dividing 1 by that number.
var colWidth:uint = frame.width / 10;  //division by ten
var colWidth:uint = frame.width * 0.1; //produces the same result as multiplication by 0.1
Let's take a quick look at the results of our recent optimization efforts. The CPU load has finally improved, dropping from 41.7% to 37.5%, but the memory consumption tells a different story. Maximum memory has risen to 9.4MB, the highest level yet, and the graph's sharp, saw-tooth edges show that garbage collection is being run more often again. Some optimization techniques will have this inverse effect on memory and CPU load, improving one at the expense of the other. With memory consumption almost back to square one, a lot more work still needs to be done.
Step 8: Recycling Is Good for the Environment
You too can play your part in saving the environment: recycling your objects when writing your AS3 code reduces the amount of energy consumed by your programs. Both the creation and destruction of objects are expensive operations, so if your program is constantly creating and destroying objects of the same type, big performance gains can be achieved by recycling those objects instead. Looking at the Flames class, we can see that a lot of particle objects are being created and destroyed every second:
private function drawParticles( event ):void {
    createParticles( 20 );
    fireMC.graphics.clear();
    for ( var i:* in fireParticles ) {
        var particle:Object = fireParticles[i];
        if (particle.life == 0 ) {
            delete( fireParticles[i] );
            continue;
        }
There are many ways to recycle objects; most involve creating a second variable to store unneeded objects instead of deleting them. Then, when a new object of the same type is required, it is retrieved from the store instead of creating a new one. New objects are only created when the store is empty. We are going to do something similar with the particle objects of the Flames class.
First, we create a new array called inactiveFireParticles[], which stores references to particles whose life property is zero (dead particles). In the drawParticles() method, instead of deleting a dead particle, we add it to the inactiveFireParticles[] array.
private function drawParticles( event ):void {
    createParticles( 20 );
    fireMC.graphics.clear();
    for ( var i:* in fireParticles ) {
        var particle:Object = fireParticles[i];
        if( particle.life <= 0 ) {
            if( particle.life == 0 ){
                particle.life = -1;
                inactiveFireParticles.push( particle );
            }
            continue;
        }
Next, we modify the createParticles() method to first check for any stored particles in the inactiveFireParticles[] array, and use them all before creating any new particles.
private function createParticles( count ):void{
    var anchorPoint = 0;
    for(var i:uint = 0; i < count; i++){
        var particle:Object;
        if( inactiveFireParticles.length > 0 ){
            particle = inactiveFireParticles.shift();
        }else {
            particle = new Object();
            fireParticles.push( particle );
        }
        particle.x = uint( Math.random() * frame.width * 0.1 ) + anchors[anchorPoint];
        particle.y = frame.bottom;
        particle.life = 70 + uint( Math.random() * 30 );
        particle.size = 5 + uint( Math.random() * 10 );
        if(particle.size > 12){
            particle.size = 10;
        }
        particle.anchor = anchors[anchorPoint] + uint( Math.random() * 5 );
        anchorPoint = (anchorPoint == 9)? 0 : anchorPoint + 1;
    }
}
Step 9: Use Object and Array Literals Whenever Possible
When creating new objects or arrays, using the literal syntax is faster than using the new operator.
private var memoryLog:Array = new Array(); // array created using the new operator
private var memoryLog:Array = [];          // array created using the faster array literal

particle = new Object(); // object created using the new operator
particle = {};           // object created using the faster object literal
Step 10: Avoid Using Dynamic Classes
Classes in ActionScript can either be sealed or dynamic. They're sealed by default, meaning that an object derived from one can only have the properties and methods defined in the class. With dynamic classes, new properties and methods can be added at runtime. Sealed classes are more efficient than dynamic classes, because some Flash Player performance optimizations can be done when all the functionality a class can ever have is known beforehand.
Within the Flames class, the thousands of particles extend the built-in Object class, which is dynamic. Since no new properties need to be added to a particle at runtime, we'll save more resources by creating a custom sealed class for the particles.
Here is the new Particle, which has been added to the same Flames.as file.
class Particle{
    public var x:uint;
    public var y:uint;
    public var life:int;
    public var size:Number;
    public var anchor:uint;
}
The createParticles() method will also be adjusted, changing the line
var particle:Object; particle = {};
to instead read:
var particle:Particle; particle = new Particle();
Step 11: Use Sprites When You Don't Need the Timeline
Like the Object class, MovieClip is a dynamic class. The MovieClip class inherits from the Sprite class, and the main difference between the two is that MovieClip has a timeline. Since Sprites have all the functionality of MovieClips minus the timeline, use them whenever you need a DisplayObject that doesn't need a timeline. The Flames class extends MovieClip but does not use the timeline, as all its animation is controlled through ActionScript. The fire particles are drawn on fireMC, which is also a MovieClip that makes no use of its timeline.
We change both Flames and fireMC to extend Sprite instead, replacing:
import flash.display.MovieClip;
private var fireMC:MovieClip = new MovieClip();
public class Flames extends MovieClip{
with
import flash.display.Sprite;
private var fireMC:Sprite = new Sprite();
public class Flames extends Sprite{
Step 12: Use Shapes Instead of Sprites When You Don't Need Child Display Objects or Mouse Input
The Shape class is even lighter than the Sprite class, but it cannot support mouse events or contain child display objects. As fireMC requires none of this functionality, we can safely turn it into a Shape.
import flash.display.Shape;
private var fireMC:Shape = new Shape();
The graph shows big improvements in memory consumption, with it dropping and remaining stable at 4.8MB. The saw-tooth edges have been replaced by an almost straight horizontal line, meaning garbage collection is now rarely run. But the CPU load has mostly gone back again to its original high level of 41.7%.
Step 13: Avoid Complex Calculations Inside Loops
They say over 50% of a program's time is spent running 10% of its code, and most of that 10% is likely to be taken up by loops. Many loop optimization techniques involve moving as many of the CPU-intensive operations as possible outside the body of the loop. These operations include object creation, variable lookups and calculations.
for( var i = 0; i < memoryLog.length; i++ ){
    // loop body
}
The first loop in the drawGraph() method is shown above. The loop runs through every item of the memoryLog array, using each value to plot points on the graph. At the start of each iteration, it looks up the length of the memoryLog array and compares it with the loop counter. If the memoryLog array has 200 items, the loop runs 200 times, and performs this same lookup 200 times. Since the length of memoryLog does not change, the repeated lookups are wasteful and unnecessary. It's better to look up the value of memoryLog.length just once, before the loop begins, and store it in a local variable, since accessing a local variable is faster than accessing an object's property.
var memoryLogLength:uint = memoryLog.length;
for( var i = 0; i < memoryLogLength; i++ ){
    // loop body
}
In the Flames class, we adjust the two loops in the drawGraph() method as shown above.
Step 14: Place Conditional Statements Most Likely to Be True First
Consider the block of if..else conditionals below, derived from the drawParticles() method:
if( particle.life > 90 ){ // a range of 10 values, between 91 - 100
    size *= 1.5;
}else if( particle.life > 45){ // a range of 45 values, between 46 - 90
    particle.x += Math.random() * 6 - 3;
    size *= 1.2;
}else { // a range of 45 values, between 0 - 45
    transperency = 0.1;
    size *= 0.3;
    particle.x += Math.random() * 4 - 2;
}
A particle's life value can be any number between 0 and 100. The if clause tests whether the current particle's life is between 91 and 100, and if so executes the code within that block. The else-if clause tests for a value between 46 and 90, while the else clause takes the remaining values, those between 0 and 45. Considering the first check is also the least likely to succeed, as it has the smallest range of numbers, it should be the last condition tested. The block is rewritten as shown below, so that the most likely conditions are evaluated first, making the evaluation more efficient.
if( particle.life < 46 ){
    transperency = 0.1;
    size *= 0.3;
    particle.x += Math.random() * 4 - 2;
}else if( particle.life < 91 ){
    particle.x += Math.random() * 6 - 3;
    size *= 1.2;
}else {
    size *= 1.5;
}
Step 15: Add Elements to the End of an Array Without Pushing
The Array.push() method is used quite a lot in the Flames class. It will be replaced by a faster technique that uses the array's length property.
cpuLog.push( cpu );            // slow and pretty
cpuLog[ cpuLog.length ] = cpu; // fast and ugly
When we know the length of the array, we can replace Array.push() with an even faster technique, as shown below.
var output:Array = []; //output is a new, empty array. Its length is 0
for( var i:uint = 0; i < steps; i++ ){
    // the value of i also starts at zero. Each loop cycle increases both i and output.length by one
    var progress:Number = i / steps;
    var color:uint = Color.interpolateColor( color1, color2, progress );
    output[i] = color; // faster than cpuLog[ cpuLog.length ] = cpu;
}
Step 16: Replace Arrays With Vectors
The Array and Vector classes are very similar, except for two major differences: Vectors can only store objects of the same type, and they're more efficient and faster than Arrays. Since all the arrays in the Flames class store variables of only one type - ints, uints or Particles, as required - we shall convert them all to Vectors.
These arrays:
private var memoryLog:Array = [];
private var cpuLog:Array = [];
private var fireParticles:Array = [];
private var palette:Array = [];
private var anchors:Array = [];
private var inactiveFireParticles:Array = [];
...are replaced with their Vector equivalents:
private var memoryLog:Vector.<Number> = new Vector.<Number>();
private var cpuLog:Vector.<Number> = new Vector.<Number>();
private var fireParticles:Vector.<Particle> = new Vector.<Particle>();
private var palette:Vector.<uint> = new Vector.<uint>();
private var anchors:Vector.<uint> = new Vector.<uint>();
private var inactiveFireParticles:Vector.<Particle> = new Vector.<Particle>();
Then we modify the getColorRange() method to work with Vectors rather than Arrays.
private function getColorRange( color1, color2, steps):Vector.<uint>{
    var output:Vector.<uint> = new Vector.<uint>();
    for( var i:uint = 0; i < steps; i++ ){
        var progress:Number = i / steps;
        var color:uint = Color.interpolateColor( color1, color2, progress );
        output[i] = color;
    }
    return output;
}
Step 17: Use the Event Model Sparingly
While very convenient and handy, the AS3 event model is built on top of an elaborate setup of event listeners, dispatchers and objects; then there is event propagation, bubbling and much more, about which a whole book could be written. Whenever possible, call a method directly rather than through the event model.
addEventListener( Event.ENTER_FRAME, drawParticles );
addEventListener( Event.ENTER_FRAME, getStats );
addEventListener( Event.ENTER_FRAME, drawGraph );
The Flames class has three event listeners calling three different methods, all bound to the ENTER_FRAME event. In this case, we can keep the first event listener and get rid of the other two, then have the drawParticles() method call getStats(), which in turn calls drawGraph(). Alternatively, we could simply create a new method that calls getStats(), drawGraph() and drawParticles() directly, then have just one event listener bound to that new method. The second option is more expensive, however, so we'll stick with the first.
// this line is added before the end of the drawParticles() method
getStats();

// this line is added before the end of the getStats() method
drawGraph();
We also remove the event parameter (which holds the Event object) from both drawGraph() and getStats(), as it is no longer needed.
Step 18: Disable All Mouse Events for Display Objects That Do Not Need It
Since this Flash animation does not require any user interaction, we can free its display objects from dispatching unnecessary mouse events. In the Flames class, we do that by setting its mouseEnabled property to false. We also do the same for all its children by setting the mouseChildren property to false. The following lines are added to the Flames constructor:
mouseEnabled = false;
mouseChildren = false;
Step 19: Use the Graphics.drawPath() Method to Draw Complex Shapes
The Graphics.drawPath() method is optimized for performance when drawing complex paths with many lines or curves. In the Flames.drawGraph() method, the CPU load and memory consumption graph lines are both drawn using a combination of the Graphics.moveTo() and Graphics.lineTo() methods.
for( var i = 0; i < memoryLogLength; i++ ){
    ypoint = ( memoryLog[i] - memoryMin ) / memoryRange * graphHeight;
    xpoint = (i / sampleSize) * graphWidth;
    if( i == 0 ){
        graphMC.graphics.moveTo( xpoint, -ypoint );
        continue;
    }
    graphMC.graphics.lineTo( xpoint, -ypoint );
}
We replace the original drawing methods with calls to Graphics.drawPath(). An added advantage of the revised code below is that we also get to remove the drawing commands from the loops.
var commands:Vector.<int> = new Vector.<int>();
var data:Vector.<Number> = new Vector.<Number>();
for( var i = 0; i < memoryLogLength; i++ ){
    ypoint = ( memoryLog[i] - memoryMin ) / memoryRange * graphHeight;
    xpoint = (i / sampleSize) * graphWidth;
    if( i == 0 ){
        data[ data.length ] = xpoint;
        data[ data.length ] = -ypoint;
        commands[ commands.length ] = 1;
    }
    data[ data.length ] = xpoint;
    data[ data.length ] = -ypoint;
    commands[ commands.length ] = 2;
}
graphMC.graphics.drawPath( commands, data );
Step 20: Make the Classes Final
The final attribute specifies that a method cannot be overridden or that a class cannot be extended. It can also make a class run faster, so we'll make both the Flames and Particle classes final.
Edit: Reader Moko pointed us to this great article by Jackson Dunstan, which remarks that the final keyword does not actually have any effect on performance.
The CPU load is now 33.3%, while the total memory used stays between 4.8 and 5MB. We've come a long way from the CPU load of 41.7% and peak memory size of 9MB!
Which brings us to one of the most important decisions to be made in an optimization process: knowing when to stop. If you stop too early, your game or application may perform poorly on low end systems, and if you go too far, your code may get more obfuscated and harder to maintain. With this particular application, the animation looks smooth and fluid while CPU and memory usage are under control, so we'll stop here.
Summary
We have just looked at the process of optimization, using the Flames class as an example. While the many optimization tips were presented in a step by step fashion, the order doesn't really matter. What's important is being aware of the many issues that can slow down our program, and taking measures to correct them.
But remember to watch out for premature optimization; focus first on building your program and making it work, then start tweaking performance.
http://code.tutsplus.com/tutorials/actionscript-3-0-optimization-a-practical-example--active-11295
*Sits quietly, pulls out gun, BLAM!*
But thanks anyways.
Uh, this code is from the Sam's Teach Yourself C Programming in 21 days and it never stops, what exactly is wrong with it?
#include <stdio.h>
int count;
int main(void)
{
I think the world would have been better without 911 seeing as how all of america is on orange alert (right before war)
Do you have MSN "The One"(great movie)?
Thats great I also check it often and my e-mail. Do you have irc for free or did you pay for it?
Welcome aboard bedop, what messangers do you have?
P.S.
My e-mail: caphack@hotmail.com
My MSN: caphack@hotmail.com
My Aim: caphack1001
No Icq
Working on irc
What are the principles of least privilege? And in every program I have seen they all use global variables unless of course the variables are in a function...why do so many books have that and what...
Hey datainjector I got the same error all that is wrong is that there is a period on the end of the e-mail address, delete that and it works. :D
Also how where is everyone as for C knowledge, like...
A: Sure why not
Q: How many books do you have in you own "library" and how many did you pay for?
I would like to join your group blue and I have already sent you an e-mail. BTW this my e-mail addy: caphack@hotmail.com
P.S. I understand what you mean hobo and trust me, this board is the...
Ok I am going to try a couple of different ways of scanning so that I can check on the input and see how everything works but thanks for all your help :)
Hobo:
I tried to put them all in one scanf but I think I had commas between the %f's...would that be a problem? I changed the f to a double just to see if that would fix the 'type' error.
...
Once again this is another one of Sams quizes, first here is the question: 10. Write a program that uses a function to find the average of five type float values entered by the user.
Now this is...
In the library eh...guess you have never seen our libraries selection of computer books let alone computer programming books :(
That would be what I am using, and it did give me errors, I just got the 3 different programs mixed up :o, sorry again, like I said I will just try what you posted and go from there, I don't like...
Well I will try to fix it, then if it is still weird(most likely ;)) I will probably try your "later lesson" way, thanks Prelude :)
P.S.
Sorry about that, I have yet another program that...er...
This is question #8 from chapter 5 in Sams teach yourself c programming in 21 days:
8. Write a function that receives two numbers as arguments. The function
should divide the first number by the...
A: A big salary or the opportunity to work at the place of my dreams.
Q: What is your dream job?
Ok then, so the whole long x, x_cubed and so on and so on is pretty much like a frame that defines how the function works, but when you put input it, input takes over and just runs it course...ok...
I dunno if someone wants to take the time but I would be glad, I got this code from Sams Teach Yourself C Programming in 21 Days:
#include <stdio.h>
long cube(long x);
long input,answer;
A: T.V. Show event...probably nothing, all of these new t.v. series suck IMO
Q: Why do people assume that all people who wear glasses :cool: are smart and watch either Start Trek, Star Wars, or...
A: I am :) I own an xbox and intend to spread it throughout the world.
Q: What do you all have against the xbox and what makes you think that "gaycube" is so good??
A: Post whatever you want here.
Q: Would anyone here be offened if I posted an xbox comic when I get it from my friend...
A: How should I know...I have yet to begin working :)
Q: How many people here are their own bosses?
A: Because of farmers gasses.
Q: This is a question and an answer, how many of you find this funny:
P.S. This is meant to be towards ober's post and the... | https://cboard.cprogramming.com/search.php?s=798342dae941e64872072e2ac023144b&searchid=3604221 | CC-MAIN-2020-16 | refinedweb | 771 | 90.09 |
Apart from the usual way of creating an event in FOSSASIA's Orga Server project by sending POST requests to the Events API, another way of creating events is to import a zip file which is an archive of multiple JSON files. This way you can create a large event like FOSSASIA, with lots of data related to sessions, speakers, microlocations and sponsors, just by uploading JSON files to the system. Sample JSON files can be found in the open-event project of FOSSASIA. The basic workflow of importing an event and how it works is as follows:
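To get a feel for what such an archive looks like, here is a small sketch that builds a zip of JSON files in memory with the standard library. The file names used ('event.json', '<service>.json') are an assumption for illustration, not necessarily the exact layout the server expects; check the open-event sample for that.

```python
import io
import json
import zipfile

def build_import_archive(event, extras=None):
    """Pack an event dict plus optional per-service lists into a zip archive.

    `extras` maps a service name (e.g. 'speakers') to a list of dicts.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
        # the event file comes first conceptually; zip order does not matter
        zf.writestr('event.json', json.dumps(event))
        for name, items in (extras or {}).items():
            zf.writestr(name + '.json', json.dumps(items))
    buf.seek(0)
    return buf

archive = build_import_archive(
    {'name': 'FOSSASIA 2016'},
    {'speakers': [{'id': 1, 'name': 'Alice'}]},
)
print(zipfile.ZipFile(archive).namelist())
```

The returned buffer can then be written to disk or sent directly as the body of the upload request.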
- First step is similar to uploading files to the server. We need to send a POST request with multipart form data, with the zipped archive containing the JSON files.
- The POST request starts a celery task that imports the data from the JSON files and stores it in the database.
- The celery task URL is returned as a response to the POST request. You can poll this URL to get the status of the task. If the status is FAILURE, we get the error text along with it. If the status is SUCCESS, we get the resulting event data.
- In the celery task, each JSON file is read separately and the data is stored in the db with the proper relations.
- Sending a GET request to the above-mentioned celery task, after the task has been completed, returns the event id along with the event URL.
Let’s see how each of these points work in the background.
Uploading ZIP containing JSON Files
For uploading a zip archive, instead of sending JSON data in the POST request we send multipart form data. The multipart/form-data format allows an entire file to be sent as data in the POST request, along with the relevant file information. One can learn more about the various form content types here.
An example cURL request looks something like this:
curl -H "Authorization: JWT <access token>" -X POST -F 'file=@event1.zip' <server>/v1/events/import/json
The above cURL request uploads the file event1.zip from your current directory, under the key 'file', to the endpoint /v1/events/import/json. The user uploading the file needs to have a JWT authentication key, or in other words be logged in to the system, as it is necessary to create an event.
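To see what that multipart body actually contains, here is a stdlib-only sketch that builds one by hand. In practice an HTTP client library does this for you; the boundary string below is arbitrary, and the field and file names simply mirror the cURL example.

```python
import uuid

def build_multipart(field_name, filename, payload):
    """Hand-roll a multipart/form-data body for a single file field."""
    boundary = uuid.uuid4().hex
    head = (
        '--{b}\r\n'
        'Content-Disposition: form-data; name="{n}"; filename="{f}"\r\n'
        'Content-Type: application/zip\r\n'
        '\r\n'
    ).format(b=boundary, n=field_name, f=filename).encode()
    tail = '\r\n--{b}--\r\n'.format(b=boundary).encode()
    body = head + payload + tail
    content_type = 'multipart/form-data; boundary=' + boundary
    return body, content_type

body, ctype = build_multipart('file', 'event1.zip', b'...zip bytes...')
print(ctype.split(';')[0])  # multipart/form-data
```

The returned content type, including the boundary, goes into the Content-Type header, and the body bytes become the request body.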
@import_routes.route('/events/import/<string:source_type>', methods=['POST'])
@jwt_required()
def import_event(source_type):
    if source_type == 'json':
        file_path = get_file_from_request(['zip'])
    else:
        file_path = None
        abort(404)
    from helpers.tasks import import_event_task
    task = import_event_task.delay(email=current_identity.email, file=file_path,
                                   source_type=source_type, creator_id=current_identity.id)
    # create import job
    create_import_job(task.id)
    # if testing
    if current_app.config.get('CELERY_ALWAYS_EAGER'):
        TASK_RESULTS[task.id] = {
            'result': task.get(),
            'state': task.state
        }
    return jsonify(
        task_url=url_for('tasks.celery_task', task_id=task.id)
    )
After the request is received, we check if a file exists under the key 'file' of the form-data. If it does, we save the file and get the path to the saved file. Then we send this path over to the celery task and run the task with celery's .delay() function. After the celery task is started, the corresponding data about the import job is stored in the database for future debugging and logging purposes. After this, we return the task URL for the celery task that we started.
Celery Task to Import Data
Just like exporting an event, importing is a time-consuming task, and we don't want other application requests to be paused because of it. Hence, we use a celery queue to execute this task. Whenever an import task is started, it is added to the celery queue, and when it comes to the front of the queue it is executed.

For importing, we have created a celery task, import.event, which calls the import_event_task_base() function; that function uses the import helper functions to read the data from the JSON files and save it in the database. After the task is completed, we update the import job record with the status as either SUCCESS or FAILURE, depending on the outcome of the celery task.

As a result of the celery task, the newly created event's id and the frontend link from where we can visit it are returned. This, along with the status of the celery task, is returned as the response for a GET request on the task URL. If the celery task fails, the state is changed to FAILURE, and the error that celery encountered is returned as the error message in the result key. We also print an error traceback in the celery worker.
@celery.task(base=RequestContextTask, name='import.event', bind=True, throws=(BaseError,))
def import_event_task(self, file, source_type, creator_id):
    """Import Event Task"""
    task_id = self.request.id.__str__()  # str(async result)
    try:
        result = import_event_task_base(self, file, source_type, creator_id)
        update_import_job(task_id, result['id'], 'SUCCESS')
        # return item
    except BaseError as e:
        print(traceback.format_exc())
        update_import_job(task_id, e.message, e.status if hasattr(e, 'status') else 'failure')
        result = {'__error': True, 'result': e.to_dict()}
    except Exception as e:
        print(traceback.format_exc())
        update_import_job(task_id, e.message, e.status if hasattr(e, 'status') else 'failure')
        result = {'__error': True, 'result': ServerError().to_dict()}
    # send email
    send_import_mail(task_id, result)
    # return result
    return result
Save Data from JSON
In the import helpers, we have the functions which perform the main task of reading the JSON files, creating SQLAlchemy model objects from them and saving them in the database. There are a few global dictionaries which help maintain the order in which the files are to be imported and saved, as well as the file-to-model mapping. The first JSON file to be imported is the event JSON file, since all the other tables to be imported are related to the event table. After that, the order in which the files are read is as follows:
- SocialLink
- CustomForms
- Microlocation
- Sponsor
- Speaker
- Track
- SessionType
- Session
This order helps maintain the foreign key constraints. For importing data from these files we use the function create_service_from_json(). It sorts the elements in the data list based on the key "id", then loops over all the dictionaries contained in the list. In each iteration it deletes the unnecessary key-value pairs from the dictionary and sets the event_id for that element to the id of the newly created event, instead of the old id present in the data. After all this is done, it creates a model object, based on the filename-to-model mapping, from the dict data, and saves that model data into the database.
def create_service_from_json(task_handle, data, srv, event_id, service_ids=None):
    """
    Given :data as json, create the service on server
    :service_ids are the mapping of ids of already created services.
    Used for mapping old ids to new
    """
    if service_ids is None:
        service_ids = {}
    global CUR_ID
    # sort by id
    data.sort(key=lambda k: k['id'])
    ids = {}
    ct = 0
    total = len(data)
    # start creating
    for obj in data:
        # update status
        ct += 1
        update_state(task_handle, 'Importing %s (%d/%d)' % (srv[0], ct, total))
        # trim id field
        old_id, obj = _trim_id(obj)
        CUR_ID = old_id
        # delete not needed fields
        obj = _delete_fields(srv, obj)
        # related
        obj = _fix_related_fields(srv, obj, service_ids)
        obj['event_id'] = event_id
        # create object
        new_obj = srv[1](**obj)
        db.session.add(new_obj)
        db.session.commit()
        ids[old_id] = new_obj.id
        # add uploads to queue
        _upload_media_queue(srv, new_obj)
    return ids
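The old-id-to-new-id mapping returned by create_service_from_json() is what keeps relations between files intact. Below is a stripped-down, self-contained illustration of that remapping idea, with a plain dict and a counter standing in for the SQLAlchemy models and database-assigned primary keys; all names here are illustrative, not the project's actual helpers.

```python
import itertools

_new_ids = itertools.count(100)  # stands in for database-assigned primary keys

def import_items(data, fix_related=None):
    """Remap each item's old id to a freshly assigned one.

    `fix_related` is an optional (key, mapping) pair naming a foreign-key
    field whose old value must be rewritten using an earlier file's mapping.
    """
    ids = {}
    created = []
    for obj in sorted(data, key=lambda k: k['id']):  # mirror the sort-by-id step
        old_id = obj.pop('id')  # mirror the _trim_id step
        if fix_related is not None:
            key, mapping = fix_related
            if key in obj:
                obj[key] = mapping[obj[key]]  # mirror _fix_related_fields
        obj['new_id'] = next(_new_ids)
        created.append(obj)
        ids[old_id] = obj['new_id']
    return ids, created

# tracks are imported first, then sessions that reference them
track_ids, _ = import_items([{'id': 7, 'name': 'Python'}])
session_ids, sessions = import_items(
    [{'id': 3, 'title': 'Talk', 'track_id': 7}],
    fix_related=('track_id', track_ids),
)
print(sessions[0]['track_id'])  # the track's newly assigned id, not the old 7
```

This is why the files must be imported in dependency order: a session can only have its track_id rewritten once the tracks file has produced its id mapping.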
After the data has been saved, the next thing to do is upload all the media files to the server. We do this using the _upload_media_queue() function. It gets the paths to upload the files to from the storage.py helper file for the APIs, and then uploads the files using the various helper functions to static data storage services such as AWS S3, Google Storage, etc.
Other than this, the import helpers also contain the function to create an import job, which keeps a record of all the imports along with the task url and the user id of the user who started the importing task. It also stores the status of the task. Then there is the get_file_from_request() function, which saves the file that is uploaded through the POST request and returns the path to that file.
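A minimal sketch of what a get_file_from_request()-style helper boils down to; the function name and signature here are illustrative (the real helper reads the file out of the Flask POST request object, which we replace with raw bytes to keep the example self-contained):

```python
import os
import tempfile

# Hypothetical stand-in for get_file_from_request(): persist the uploaded
# bytes to disk and hand back the saved path so the import task can read it.
def save_uploaded_file(file_bytes, filename, upload_dir=None):
    if upload_dir is None:
        # a throwaway directory; the real helper uses a configured upload path
        upload_dir = tempfile.mkdtemp()
    os.makedirs(upload_dir, exist_ok=True)
    path = os.path.join(upload_dir, filename)
    with open(path, 'wb') as f:
        f.write(file_bytes)
    return path
```

The returned path is what the celery import task would then open and unpack.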
Get Response about Event Imported
The POST request returns a task url of the form /v1/tasks/ebe07632-392b-4ae9-8501-87ac27258ce5. To get the final result, you need to keep polling this URL. To know more about polling, read my previous blog about exporting an event or visit this link. When the task is completed you get a "result" key along with the status. The state can either be SUCCESS or FAILURE. If it is FAILURE, you get the error message that caused the celery task to fail. If it is a success, you get data related to the event that was created by the import. The data returned are the event id, event name, and the event url, which you can use to visit the event from the frontend. This data is also sent to the user as an email and notification.
An example response looks something like this:
{
  "result": {
    "event_name": "FOSSASIA 2016",
    "id": "24",
    "url": "
  },
  "state": "SUCCESS"
}
The corresponding event name and url are also sent to the user who started the import task. From the frontend, one can use the result object to show the name of the imported event along with the event url. Since the id and identifier are both present in the returned result, one can also use them to send GET, PATCH, and other API requests to the events/ endpoint and get the corresponding relationship urls from it to query the other APIs. Thus, the entire imported data becomes available to the frontend as well.
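The polling loop described above can be sketched generically. Here `fetch` is an illustrative stand-in for issuing a GET on the task URL and decoding the JSON response; the helper name is an assumption, not part of the server's API:

```python
import time

# Keep asking for the task status until the celery task reports
# SUCCESS or FAILURE, then hand the final response back to the caller.
def poll_task(fetch, interval=0.0, max_tries=100):
    for _ in range(max_tries):
        resp = fetch()
        if resp.get('state') in ('SUCCESS', 'FAILURE'):
            return resp
        time.sleep(interval)  # wait before the next poll
    raise TimeoutError('task did not finish in time')
```

In a real client, `interval` would be a second or more so the server is not hammered with requests.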
Reference Links:
- Read about the form data types:
- Read more about the various task APIs related to celery:
- Read more about AJAX polling:
- More about the various import helper functions: | https://blog.fossasia.org/create-event-by-importing-json-files-in-open-event-server/ | CC-MAIN-2022-21 | refinedweb | 1,666 | 61.97 |
Accelerometer Shield for Physics Class and Beyond
Introduction: Accelerometer Shield for Physics Class and Beyond
During a physics class we were performing an experiment to measure the acceleration due to gravity (9.8 m/s^2), and while we did not make any new physics discoveries, I had an idea for improving the experiment. The way the experiment worked was to drop an NXT with an attached accelerometer off of a balcony, and someone below would catch it. The problem was that if they missed it, $200 would hit the ground and shatter. After some research I found that for $50 you can build an Arduino with an accelerometer that works just as well and has less signal noise.
A few benefits of the MMA7361 accelerometer are that it can be set to detect +/- 1.5g or +/- 6g. It also has very low power consumption, and its small size, even with the breakout board, makes it an ideal fit for any project. Not to mention the relatively low cost.
Step 1: Materials
1x Arduino Uno (a Mega will allow you to store more data, but it is also bigger and more expensive)
1x MMA7361 Accelerometer - Sparkfun has one with a breakout board
1x Proto shield/perf board
The every project extras:
Solder
Headers
Wire
LEDs
Push buttons
Resistors
On/Off switch
etc.
Step 2: Wiring
Depending on whether you are using a proto board or perf board, solder all the wires from the breakout board to the appropriate pin on the Arduino.
Sleep - 13
SelfTest/STP - 12
ZeroG/0G - 11
Gselect/GSel - 10
X axis - A0
Y axis - A1
Z axis - A2
VDD - 3.3 V
Next attach 2 buttons and 2 more LEDs on 4 other pins
Step 3: Programming
There are only a few variables that you need to change based on what you want to record. The code is relatively well commented, which I hope will help you. Below is the Readme file in the zip file.
You will also need to add the accelerometer library. If you have not added a library before, create a folder called "libraries" in your Arduino sketch folder and then drop the accelerometer folder inside of the attached libraries folder into your new "libraries" folder.
total_points - controls the number of recorded data points
const unsigned long loop_time - controls how often data is recorded ex. 5 corresponds to 5ms.
In order for the program to properly work the AcceleroMMA7361 library must be added to your Arduino library folder and the program restarted.
The maximum number of data points that can be collected on the Arduino SRAM is 700. Additional data points may be collected with a SD card or additional storage.
The scaling factor for a raw value "x" is: ((x/100) - 1) * (acceleration due to gravity). The acceleration due to gravity roughly equals 9.8. The scaled values are in units of m/s^2.
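The scaling above can be sketched as a small helper (not the library's own code — the function name is illustrative). The library returns readings where 100 = 1g, so subtracting the constant 1g of gravity and multiplying by g gives the net acceleration:

```cpp
#include <cassert>
#include <cmath>

// Convert a raw MMA7361 library reading (100 = 1g) into net acceleration
// in m/s^2, with gravity subtracted out.
double rawToMetersPerSecondSquared(int x, double g = 9.8) {
    return ((x / 100.0) - 1.0) * g;
}
```

Resting flat, the z axis reads about 100 and scales to 0 m/s^2; in free fall it reads about 0 and scales to -9.8 m/s^2.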
To change how often data points are collected open the program file and change the constant integer loop_time to your desired time. This number is in milliseconds.
The LEDs correspond to each of the buttons and the following action. The button farthest from the accelerometer turns the green LED on and records values. The LED will turn off when pressed again and values will stop being collected. The red LED corresponds to the button closest to the accelerometer and lights up when values are being transmitted to the serial monitor.
The red LED will flash twice at the beginning to indicate that the start up loop and calibration have completed and data can now be collected.
The most basic code for this to make sure everything is working is:
When rested flat on something the Z axis should read roughly 1 and the X and Y axis 0 each.
Step 4: Upload and Use
Now just open the attached Arduino program, upload it, and open the serial monitor to see the data printed. Keep in mind that you have to subtract 1 (or 100 * 10^-2) from the results, because that is the constant force of gravity. That way, when it is in free fall it will read -1, which * 9.8 = -9.8 m/s^2 - the constant acceleration from gravity.
There are also several example programs that came with the library which you can test out and modify yourself. I hope you enjoy this and please let me know your results!
Is it possible to use more than one accelerometer at the same time? I am planning a project that has some strings hanging; I want to attach the sensors to the strings and measure the acceleration when they move. But I haven't figured out how to do it.
Nice project!
Hint: you can find the MMA7361 and other sensors for cheap on this website:...
They ship from Canada.
Hey
when I load the rawdata sketch it doesn't upload, it says that in
AcceleroMMA7361 accelero;
'accelero' is invalid
can you help me out?
thanks
Have you loaded the library?
#include <AcceleroMMA7361.h>
AcceleroMMA7361 accelero;
Please let me know if the problem persists.
yes Those lines were included in my sketch
does it give you a line# or any more information for the error?
AcceleroMMA7361.cpp:29: error: 'AcceleroMMA7361' has not been declared
Hey Hammock Boy,
Thanks for this tutorial with the MMA7361 Accelerometer from SparkFun. I've order this guy 2 weeks ago and now it's just dusting around under my bed since I can't get it start working :(
I've used the library code example that you are using and nothing worked. Do I have to use some additional resistors when connecting the Accelerometer to my Arduino Uno? Please help. I'll be very grateful to you.
P.S. It'l be great if you would've made a tutorial on how to connect and use the MMA7361 Accelerometer.
You should not need any resistors. Here is a link that I found very helpful
Please let me know if you have any more questions and I will be glad to help.
Hey, Thanks for the link. I've tried to do everything it said in the Instructions and I finally got something working! After the Calibrating step my Serial Monitor showed x=0 and whenever I move the x axis the number changes in the range of about from -400 to 400. (Is it supposed to go from -100 to 100?) I knew my mistake when i corrected accelero.setARefVoltage(3.3); to accelero.setARefVoltage(5);
The y and z axes however don't respond to the tilts and show numbers way off from 0. (Around 30 for the y and -99 on z axis)
Did you have the same problems? Or am I doing something wrong?
Oh no! Sorry man, It was my bad! The x and y axis work perfect!! :D I just mixed up with y and z axis. By the way, the reading are from -100 to 100 on both x and y axis.
By the way, in the G-Force example sketch, Does 100 represent 1g? OR not?
I'm not completely sure about the z axis if it operates completely well or not, but if it doesn't I could live with it!! I have 2 axis ready to roll anyway! :D That will last for some projects in my mind.
Thanks for your help Hammock! I really appreciate it! If I would have any questions I hope you wouldn't be against it, since you are the first person that i've actually received feedback regarding the accelerometer. Peace man!
100 = 1g
To convert to m/s^2 you have to subtract out gravity. So for example if you were trying to determine the acceleration due to free fall you would take the magnitude of the 3 axes [ (x^2+y^2+z^2)^.5 -1 (gravity)] so at rest it should read (0^2+0^2+1^2)^.5 - 1 = 0g * 9.8m/s^2 = 0 m/s^2 (of course nothing is moving its at rest so the acceleration is 0).
Assuming that the accelerometer is upright and not moving on a table, then x and y should = 0 and z should = 100 (100 * 10^-2 = 1g) because that is the constant force from gravity. When it is in free fall (ignoring rotations of the board), z will = 0.
Try changing the speed of the loop - delay(500) - to something like delay(50). Then shake it around and see if the serial monitor displays some results for x, y, and z other than 1 and 0. It may be an issue of not reading results until after the change in motion happens. +/- 400 would mean forces of 4g (100 = 1g), which seems too high. The board can be set to read +/- 1.5g or +/- 6g, though.
Can you take a picture of the set up? Normally I would think numbers in the range of +/- 100 make sense.
this is a great idea, thanks for sharing! are you in high school or college level physics? I recently got my physics department on the arduino bandwagon too.
I am in high school AP Physics and still trying to convince my teacher to switch all of his labs/experiments from using NXT to Arduino!
Are you in college or high school?
just graduated from college
Congratulations! What did you major in?
physics! | http://www.instructables.com/id/Accelerometer-Shield-for-Physics-Class-and-beyond/ | CC-MAIN-2017-51 | refinedweb | 1,573 | 72.87 |
Unit Testing Frameworks in C#: Comparing XUnit, NUnit, and Visual Studio
In this post, we'll look into a variety of popular C# unit testing frameworks, comparing and contrasting how they work with examples.
When you find yourself (or your company) with more code than anyone could ever test by hand, what can you do? Well, unit testing has always been the perfect solution, as you can run tests that check more data than a person could in a day in a matter of milliseconds, with little to no human involvement once the tests are written.
Not only that, but using code to test code will often result in you noticing flaws with your program that would have been very difficult to spot from a programmer’s viewpoint.
Popular C# Unit Testing Frameworks
The unit testing frameworks I’ll be testing are:
- NUnit
- XUnit
- Built-in Visual Studio testing tools
All of these unit testing frameworks offer a similar end goal: to help make writing unit tests faster, simpler, and easier! But, there are still a few key differences between them. Some are more focused towards powerful complex tests, while others rank simplicity and usability as a higher priority.
First Up Is Microsoft’s Own Built-in Visual Studio Unit Testing Tools
In most versions since 2005, Visual Studio has come with a built in testing framework supported by Microsoft. This framework certainly wins the most points for installation. Though, if your copy of Visual Studio doesn’t come with it already included, you are going to have to jump through a few hoops to get it going. (We wrote a review of the 2017 version of Visual Studio here.)
This framework is the simplest of the three and uses an easy to understand method attribute structure (much like most testing frameworks) where you are able to add tags such as ‘[TestClass]’ and ‘[TestMethod]’ to your code in order to get testing.
Visual Studio even has a UI panel dedicated to visualizing your tests, which can be found under Test -> Windows -> Test Explorer.
Now, before we dive into trying out this testing framework, let’s introduce our example classes that need testing.
First, we have a Raygun, which we can fire and recharge. The only thing we need to keep track of with our Raygun is its ammo, which can run out.
We also have a bug, which we can shoot at with our Raygun. But this bug has the ability to dodge our attempts to shoot it.
If we shoot at a bug after it has just dodged, we will miss. Though, if we hit the bug square on, it’s safe to assume that it will be dead.
These two classes are defined as follows:
public class Raygun
{
    private int ammo = 3;

    public void FireAt(Bug bug)
    {
        if (HasAmmo())
        {
            if (bug.IsDodging())
            {
                bug.Miss();
            }
            else
            {
                bug.Hit();
            }
            ammo--;
        }
    }

    public void Recharge()
    {
        ammo = 3;
    }

    public bool HasAmmo()
    {
        return ammo > 0;
    }
}
public class Bug
{
    private bool dodging;
    private bool dead;

    public void Dodge()
    {
        dodging = true;
    }

    public void Hit()
    {
        dead = true;
    }

    public void Miss()
    {
        dodging = false;
    }

    public bool IsDodging()
    {
        return dodging;
    }

    public bool IsDead()
    {
        return dead;
    }
}
Seems simple enough, but we need to make sure that our Rayguns and bugs behave as we want them to.
So then, it’s time to write some unit tests! (We wrote about how to write robust unit tests in C# here.)
First up, let’s try a simple situation where we want to shoot at and hit a bug.
What we would expect is that afterward the bug will be dead, and the Raygun will still have a bit of juice left in it.
Well, let's see if we are right:
[TestClass]
public class Class1
{
    [TestMethod]
    public void TryShootBug()
    {
        Bug bug = new Bug();
        Raygun gun = new Raygun();
        gun.FireAt(bug);
        Assert.IsTrue(bug.IsDead());
        Assert.IsTrue(gun.HasAmmo());
    }
}
These tags are what allow Visual Studio’s built-in testing framework to recognize this particular class as a class that contains unit tests, and to treat the method TryShootBug() as a test case, instead of just an ordinary method.
Since these tools are built for Visual Studio, running your tests from within Visual Studio is very simple.
Just right click on any [TestMethod] tags as shown:
And would you look at that, the test passed. Looks like our Raygun can at least hit a stationary bug.
Of course, this is only showing the bare basics of what Visual Studio’s testing tools can do.
Some other very useful tags you will surely be using are the [TestInitialize] and [TestCleanup] tags.
These tags allow you to specify code that is run before (initialize) and after (cleanup) every individual test is run.
So, if you want to reload your Raygun after every encounter like a stylish gunslinger, then this should do the trick:
[TestInitialize]
public void Initialize()
{
    gun = new Raygun();
}

[TestCleanup]
public void Cleanup()
{
    gun.Recharge();
}
Stylish.
While we are still talking about the Visual Studio testing tools I’ll quickly mention the [ExpectedException] tag, which is incredibly useful for when you want to deliberately cause an exception in your test (which you will certainly want to do at some point to make sure your program isn’t accepting data it shouldn’t).
Here’s a quick example of how you would write a test that results in an exception:
[TestMethod]
[ExpectedException(typeof(System.IndexOutOfRangeException))]
public void TryMakingHeapsOfGuns()
{
    Raygun[] guns = new Raygun[5];
    Bug bug = new Bug();
    guns[5].FireAt(bug);
}
Overall, the built-in Visual Studio testing tools do exactly what they say on the box. They are simple, easy to use, and handle all the basic testing functionality you would need. Plus, if you’re already working in Visual Studio, then they are already ready to use!
Next Up Is Arguably the Most Popular C# Testing Platform, NUnit
NUnit is an incredibly widely used tool for testing, and it serves as an excellent example of the open source unit testing frameworks. It’s a broad and powerful testing solution. In fact, it’s what we use here at Raygun for the bulk of our unit testing.
NUnit is installed via a NuGet package, which you can search for within Visual Studio.
The packages I’ve used for this example are NUnit and NUnit.ConsoleRunner, though you also have the option of installing a GUI-based plugin for Visual Studio.
NUnit uses a very similar attribute-style system to the Visual Studio testing tools, but now we will be referring to a [TestClass] as a [TestFixture], and a [TestMethod] as simply a [Test].
Now, let’s go back to our Rayguns and bugs and have a look at another example, but this time using NUnit.
Now, in order to run this test using NUnit, we need to head to the command line (unless, of course, you've chosen to install a GUI-based plugin).
This time, let's make sure our dodges and ammo are working properly, so let's try and shoot a much more mobile bug:
[TestFixture]
public class NUnitTests
{
    [Test]
    public void TryShootDodgingBug()
    {
        Bug bug = new Bug();
        Raygun gun = new Raygun();
        bug.Dodge();
        gun.FireAt(bug);
        bug.Dodge();
        gun.FireAt(bug);
        bug.Dodge();
        gun.FireAt(bug);
        Assert.IsFalse(bug.IsDead());
        Assert.IsFalse(gun.HasAmmo());
    }
}
Notice the new [TestFixture] and [Test] tags.
First, you must make sure you are in your project’s root directory (e.g. C:\Users\yourUserName\Documents\Visual Studio 2015\Projects\YourProjectName) and then enter the following command in a new cmd window:
packages\NUnit.ConsoleRunner.3.6.0\tools\nunit3-console.exe
YourProjectName\bin\Debug\YourProjectName.dll
Assuming everything is set up properly, the NUnit console runner will run all the tests in your project and give you a nice little report on how things went:
Looks like our bug sure can dodge and our Raygun can certainly run out of ammo!
One feature of NUnit that makes it incredibly useful is the ability to include parameters in your tests!
This means that you can write a test case with arguments, then easily run the same test with a range of unique data. This removes the need to write unique test cases for every set of arguments you want to test.
Here’s a quick example test case we could use to make sure our Raygun was actually running out of ammo at the right time, in a much smarter way than before:
[TestCase(1)]
[TestCase(2)]
[TestCase(3)]
[TestCase(4)]
public void FireMultipleTimes(int fireCount)
{
    Bug bug = new Bug();
    Raygun gun = new Raygun();
    for (int i = 0; i < fireCount; i++)
    {
        gun.FireAt(bug);
    }
    if (fireCount >= 3)
    {
        Assert.IsFalse(gun.HasAmmo());
    }
    else
    {
        Assert.IsTrue(gun.HasAmmo());
    }
}
Excellent, with this one test case we were able to make sure a Raygun which has fired two shots still has ammo, while one that has fired three is empty. And thanks to the [TestCase] tag we were easily able to test a whole bunch of other values while we were at it!
Overall, NUnit is an excellent testing framework, and as you delve deeper into what it can offer, it surely exceeds what Microsoft’s built-in testing can offer.
Anyway, let’s look at our last testing framework, and our last attempt at shooting bugs with Rayguns!
If You Like the Sound of Facts and Theories, Then It's Time to Look at XUnit
XUnit is an open-source testing platform with a larger focus in extensibility and flexibility. XUnit follows a more community-minded development structure and focuses on being easy to expand upon.
XUnit actually refers to a grouping of frameworks, but we will be focusing on the C# version. Other versions include JUnit, a very well known testing framework for Java. XUnit also uses a more modern and unique style of testing, by doing away with the standard [test] [testfixture] terminology and using new fancy tags like Facts and Theories.
NUnit and XUnit are actually quite similar in many ways, as NUnit serves as a basis for a lot of the new features XUnit brings forward.
Note that XUnit is also installed via a NuGet package much like NUnit, which you can search for within Visual Studio. The packages I’ve used for this example are XUnit and XUnit.ConsoleRunner, though you also have the option of installing a GUI-based plugin for Visual Studio.
Much like the [TestCase] tag in NUnit, XUnit has its own solution to providing parameters to a test case. To do so, we will be using the new [InLineData] tag and Theories.
In general, a test case that has no parameters (so it doesn’t rely on any changing data) is referred to as a Fact in XUnit, meaning that it will always execute the same (so 'Fact' suits it pretty well). On the other hand, we have Theories, which refer to a test case that can take data directly from [InLineData] tags or even from an Excel spreadsheet.
So, with all these new fancy keywords in mind, let’s write a test in XUnit that uses a theory to test our bugs dodge ability:
[Theory]
[InlineData(true, false)]
[InlineData(false, true)]
public void TestBugDodges(bool didDodge, bool shouldBeDead)
{
    Bug bug = new Bug();
    Raygun gun = new Raygun();
    if (didDodge)
    {
        bug.Dodge();
    }
    gun.FireAt(bug);
    if (shouldBeDead)
    {
        Assert.True(bug.IsDead());
    }
    else
    {
        Assert.False(bug.IsDead());
    }
}
This test covers both cases at once, where the bug dodges and survives, or doesn’t dodge and gets hit. Lovely!
Now, last step, let's run the XUnit test runner from the command line (note that much like NUnit, XUnit also has a GUI-based Visual Studio plugin available for you to run tests with).
First, you must make sure you are in your project’s root directory, just like NUnit (e.g. C:\Users\yourUserName\Documents\Visual Studio 2015\Projects\YourProjectName) and then enter the following command in a new cmd window:
packages\xunit.runner.console.2.1.0\tools\xunit.console.exe
YourProjectName\bin\Debug\YourProjectName.dll
Assuming everything is set up properly, the XUnit console runner will run all the tests in your project and let you know how your tests turned out.
Looks like our dodging tests passed!
Overall XUnit acts as the more contemporary version of NUnit, offering flexible and usable testing with a fresh coat of paint.
In Conclusion…
Regardless of which of the unit testing frameworks you use, you’re going to be getting all the basics. However, there are a few differences between them that I hope I’ve highlighted so you can choose the right one for your project. Whether it’s the convenience of Microsoft’s built-in unit testing framework, the solid and well-proven status of NUnit, or the modern take on unit testing that XUnit provides, there's always something out there that will give you exactly what you need!
Want to add an extra layer of protection for your code? Catch the errors that fall through the cracks with Raygun. Take a free trial here.
Published at DZone with permission of Jack Huygens, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/unit-testing-frameworks-in-c-comparing-xunit-nunit?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev | CC-MAIN-2021-43 | refinedweb | 2,210 | 59.33 |
Writing Cool Games for Cellular Devices
One of the main usages of J2ME is game programming. In this
article, I'll explain this by way of a standalone game that I
developed. This application is a simple basketball game, in which
the player plays against a computer opponent. Each game lasts
exactly two minutes. The aim of the game is to shoot as many baskets
as you can and to prevent the computer opponent from doing the
same.
Main Functions
A typical game in a J2ME environment might consist of the
following:
- A welcome screen that redirects automatically to the main menu.
- The main menu.
- The main screen. This is where the game is actually
played.
- Level menu (set the game difficulty, etc.).
- Instruction screen.
- About screen.
The main screen and the welcome screen are executed by classes
that extend the Canvas class. This class is defined as being in the
low-level API, meaning that it allows full control of the
display at the pixel level. All the other screens are executed by
high-level API classes that use J2ME standard controls.
When we program apps for J2ME, we must also consider how to
handle unique system events such as call interrupts. When we deal
with this event, we might want to freeze the current game state or
even be able to save this game state after we exit the
application.
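One common way to handle this is to flatten the game state into a byte array that can then be written to persistent storage (such as a MIDP RecordStore) when the application is paused or destroyed. The sketch below is illustrative only — the field names are assumptions, not the article's code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical game-state holder: pack the fields to bytes on exit,
// and rebuild the state from those bytes on the next launch.
class GameState {
    int playerScore;
    int computerScore;
    long millisLeft;

    byte[] toBytes() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(playerScore);
        out.writeInt(computerScore);
        out.writeLong(millisLeft);
        out.flush();
        return buf.toByteArray();
    }

    static GameState fromBytes(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        GameState s = new GameState();
        s.playerScore = in.readInt();
        s.computerScore = in.readInt();
        s.millisLeft = in.readLong();
        return s;
    }
}
```

The byte array produced by toBytes() is exactly what a RecordStore record would hold.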
In order to get started, we'll have to download the
J2ME
wireless toolkit. This is the most elementary SDK for this
environment. The main objective of this toolkit is to compile the
Java source files and to produce two deployment files: a JAR file
that encapsulates all of the class files and a JAD (Java Application
Descriptor) file. The JAD file provides data about the application
such as vendor, JAR size, version number, etc. This SDK can be
integrated with more advanced IDEs such as Eclipse or NetBeans. You can learn more about
getting started with the wireless toolkit in the article "Wireless Development Tutorial Part I."
Game Structure
Figure 1 shows the class structure of the game.
Figure 1. General structure of the game
The game starts with an opening screen, and after five seconds
redirects to a menu screen. From the menu screen, the player
has the option to start the game, set the level, display an "about"
screen, or display the instructions. Each screen is managed by its
own class. For example, the welcome screen is managed by the
Splash class, the main game screen is managed by
MainScreen, and so forth.
The
Midlet Class
A very important class that is not displayed in the diagram
above is the
TestMidletMIDlet class. This class
extends from the
Midlet class, and its job is to
initialize all of the other classes and controls. One major method of
the
Midlet class is the
startApp()
method. Normally, when a midlet is called, it is
initially in a paused state. If everything runs normally and
no exception occurs, then the midlet enters the active
state. This is done by calling the
startApp() method.
Here is an example:
public void startApp() {
    // some code here...
    display.setCurrent(splash);
}
For this stage, the most important call is
display.setCurrent(splash). This method call sets the
midlet's
Displayable; in other words, it determines which of
our screens is to be displayed on the device. In this example it
will be the
Splash screen, as seen in Figure 2.
Figure 2. Splash screen
Splash Screen
The splash screen is the first screen that appears when the
application starts up. It extends the
Canvas class. The
Canvas class, as the
so-called low-level API, allows full control of the display at the
pixel level. In order to paint something, we have to override the
paint() method. For example:
public void paint(Graphics g) {
    // sets background color
    g.setColor(255, 255, 255);
    // fill the screen with the color
    // defined previously
    g.fillRect(0, 0, this.getWidth(), this.getHeight());
    // draw some image
    g.drawImage(screenshot, (this.getWidth() - 128) / 2,
            (this.getHeight() - 128) / 2,
            Graphics.TOP | Graphics.LEFT);
}
After a period of five seconds, or by pressing any key, our game
changes the display to the main menu screen. In order to handle key
presses we have to implement the
CommandListener interface. All key press
events will be handled by the function
commandAction().
public void commandAction(Command command, Displayable displayable) {
    // stops timer operation
    timer.cancel();
    /* switch the display to menu.
       parent is the Midlet object which
       actually does the job. */
    parent.setCurrent("MainMenu2");
}
Timer and TimerTask are two classes that
allow us to execute a function after a period of predefined time.
We use these classes to automatically switch the display to the
next screen.
public void startTimer() {
    // TimerTask defines the task or subroutine
    // to be executed.
    TimerTask gotoMenu = new TimerTask() {
        public void run() {
            timer.cancel();
            parent.setCurrent("MainMenu2");
        }
    };
    // Timer class creates and manages threads on
    // which the tasks are executed
    timer = new Timer();
    timer.schedule(gotoMenu, 5000);
}
The MainMenu Class
The
MainMenu class, shown in Figure 3, shows the
main menu of the game.
Figure 3. Main menu screen
The main menu has five elements:
- Continue: Continues a game that has been previously
stopped.
- Start Game: Starts a new game.
- Choose Level: Choose the speed of the computer player.
- Instructions: Shows an instructions screen.
- About: Shows an about screen.
In J2ME, the List class is responsible for displaying lists or menus, which is why MainMenu subclasses it. As in the Splash class, we have to implement the CommandListener interface in order to handle key presses.
The MainScreen Class
MainScreen, shown in Figure 4, is the class that displays
the actual basketball game being played. The MainScreen class extends the Canvas class.
Figure 4. Main game screen
Rules of the Game
As mentioned before, this is a standalone game in which the
player plays against the computer. There are two players on the
screen. The human player controls the blue player, who tries to put
the ball in the basket on the right. The computer controls the red
player. Each game lasts two minutes, and the player who shoots the
most baskets wins. The human player controls the blue player by the
arrow keys or by the number keys:
2 for up,
8 for down,
6 for right,
4 for left, and
5 to shoot the ball.
Elements in the Main Screen
Several graphic elements compose this screen. These include the
background, the players, the ball, etc. Each element must be a PNG
graphic file; these are illustrated in Figures 5 through 8.
Figure 5. Screen background
Figure 6. Human player icon
Figure 7. Computer player icon
Figure 8. Ball icon
When the class
MainScreen initializes, we must get
each graphic element from the JAR file and store it in
an
Image field. This can be done in the
constructor.
private Image screenShot;

public MainScreen(TestMidletMIDlet parent) {
    // some code...
    try {
        screenShot = Image.createImage("/Images/screenShot.png");
    } catch (IOException e) {}
    // we do this for all the other images
}
Handling Key Presses
As in previous classes, we again implement the
CommandListener interface. Here, we only use
CommandListener to pause the game and return to the main
menu. In order to do this, we have to initialize a "">
object first. In our example we label thisobject first. In our example we label this
Command
object as
Pause. The label can be seen in Figure 4 on the bottom
left side of the screen.
// this is the Command object, labeled as "Pause"
private Command pauseCommand =
    new Command("Pause", Command.STOP, 1);
All of the
CommandListener events are handled by
commandAction().
/**
 * Handles command actions
 */
public void commandAction(Command c, Displayable d) {
    // verified that the pause button was pressed
    if (c == pauseCommand) {
        // do something
    }
}
In the
Canvas class, we can also handle key presses
with the
keyPressed(int keyCode) and
keyReleased(int keyCode) methods. Every key press
invokes
keyPressed(), and every key release invokes
keyReleased(). In our game, we use these functions to
handle all of the arrow and select keys. These are the buttons that
move the human's player icon.
private int keyStatus = 0;

/**
 * Called when a key is pressed.
 */
protected void keyPressed(int keyCode) {
    if (this.getGameAction(keyCode) == UP) {
        keyStatus = KEY_NUM2;
    } else if (this.getGameAction(keyCode) == DOWN) {
        keyStatus = KEY_NUM8;
    } else if (this.getGameAction(keyCode) == LEFT) {
        keyStatus = KEY_NUM4;
    } else if (this.getGameAction(keyCode) == RIGHT) {
        keyStatus = KEY_NUM6;
    } else if (this.getGameAction(keyCode) == FIRE) {
        keyStatus = KEY_NUM5;
    } else {
        keyStatus = keyCode;
    }
}

/**
 * Called when a key is released.
 */
protected void keyReleased(int keyCode) {
    keyStatus = 0;
}
In general, these methods update the value of the keyStatus field. If no key is being pressed at the moment, the value of keyStatus is 0; otherwise, it is some integer value. The actual movement of the player will be explained later.
Algorithm of the Game
This class initializes an internal TimerTask. A Timer invokes the TimerTask periodically (by default, every 50 milliseconds, but this is adjustable by the human player on the Level screen). The TimerTask executes the myMove2() method.
private Timer timer;

public void startTimer() {
    // this task is executed periodically.
    TimerTask mover = new TimerTask() {
        public void run() {
            myMove2();
        }
    };
    timer = new Timer();
    // schedules the mover task.
    try {
        // if anything is being set by the level screen
        timer.schedule(mover, parent.getLevel(), parent.getLevel());
    } catch (IllegalArgumentException e) {
        timer.schedule(mover, 50, 50);
    }
}
The
myMove2() method has two roles:
- It checks the value of keyStatus (which is set by the keyPressed() and keyReleased() methods) and moves the human player icon accordingly.
- It moves the computer player icon according to the situation evolving in the game.
As previously explained, each key press (on the arrow and select
keys) sets the value of the
keyStatus field. According
to that value, we calculate the coordinates of the human player
icon.
// coordinates of human player icon
private int meX, meY;
// coordinates of the ball
private int xBall, freeBallY, yBall;
// coordinates of the field's corners
private int x1, x2, x3, x4, y1, y2, y3, y4;

private void myMove2() {
    // some code

    // define me coordinates
    switch (keyStatus) {
        // up
        case Canvas.KEY_NUM2:
            meY--;
            if (meY < y1) meY = y1;
            break;
        // down
        case Canvas.KEY_NUM8:
            meY++;
            if (meY > y4) meY = y4;
            break;
        // right
        case Canvas.KEY_NUM6:
            meX++;
            if ((x2 + x3) / 2 < meX) {
                meX = (x2 + x3) / 2;
            }
            if (ballOwner == 0 && compMode != 5)
                xBall = meX + 8;
            break;
        // left
        case Canvas.KEY_NUM4:
            meX--;
            if ((x1 + x4) / 2 > meX) {
                meX = (x1 + x4) / 2;
            }
            if (ballOwner == 0 && compMode != 5)
                xBall = meX + 8;
            break;
        // fire
        case Canvas.KEY_NUM5:
            if (compMode == 2) {
                originalY = freeBallY;
                compMode = 5;
            }
            break;
        default:
            //
    }
    // some code...
}
As the game runs, the computer controls the opposing player's
icon. The CPU player's actions vary according to the situations
that evolve during the game--in different situations, the
computer behaves differently. The
int field
compMode stores a "situation code" for every given
moment. The situations are:
- Jump ball (compMode = 1): This situation occurs only at the beginning of the game. Both players' icons are at the middle of the field, and the ball icon is right in between them, falling down. We switch situation status when either player icon catches the ball.
- The human player icon has the ball (compMode = 2): When the human player moves, we can see the ball being dribbled with the human player. The computer moves its player toward the human player, and tries to steal the ball when it's close enough.
- The computer player icon has the ball (compMode = 3): Here, the computer player dribbles the ball. In this situation, the computer tries to move towards the human player's basket. If the human player icon is in front of it, it will try to bypass him. When the computer player is near enough to the human's basket, it will try to shoot a basket.
- Not in use (compMode = 4).
- The human player shoots the ball (compMode = 5): The computer switches to this situation when the player presses the select (or fire, on some devices) key. We can see the ball being thrown to the basket. If the shot is successful, the two players are replaced on either side of the field and the computer switches the situation to 3 (compMode = 3).
- The computer player shoots the ball (compMode = 6): When the computer player has the ball and its icon is near enough to the human player's basket, it will automatically try to shoot a basket. If the shot is successful, the two players are replaced on either side of the field and the computer switches the situation to 2 (compMode = 2).
// coordinates of human player icon
private int meX, meY;
// coordinates of the ball
private int xBall, freeBallY, yBall;
// coordinates of the field's corners
private int x1, x2, x3, x4, y1, y2, y3, y4;
// computer player situation state
private int compMode;

private void myMove2() {
    // some code...

    // switch by situation
    switch (compMode) {
        /*
         * free ball jumps
         */
        case 1:
            // this method controls the jump movement of the ball
            ballJump();
            // decides who gets the ball
            // computer gets the ball
            if (Math.abs(xBall - compX) <= 21 && Math.abs(freeBallY - compY) <= 5) {
                case3Mode = 1;
                compMode = 3;
                delay = 0;
            }
            // me kidnaps the ball
            if (Math.abs(meX - xBall) <= 21 && Math.abs(meY - freeBallY) <= 5) {
                compMode = 2;
                delay = 0;
                delay2 = 0;
            }
            // calculate comp moves
            if (compY > freeBallY) { compY--; }
            if (compY < freeBallY) { compY++; }
            if (compX > xBall) { compX--; }
            if (compX < xBall) { compX++; }
            compCheckBorders();
            break;
        /*
         * the ball is at me player
         */
        case 2:
            xBall = meX + 6;
            freeBallY = meY;
            ballOwner = 0;
            delay2++;
            ballJump();
            // computer steals the ball
            if (Math.abs(meX - compX) <= 21 && Math.abs(meY - compY) <= 5 && delay >= 10) {
                case3Mode = 1;
                compMode = 3;
                delay = 0;
                delay2 = 0;
            }
            if (delay2 > 30) {
                if (Math.abs(meX + 20 - compX) > Math.abs(meY - compY)) {
                    if (compX > meX + 20) compX--;
                    else compX++;
                } else {
                    if (compY > meY) compY--;
                    else compY++;
                }
                // check borders for comp player
                compCheckBorders();
            }
            break;
        /*
         * the ball is at computer player
         */
        case 3:
            xBall = compX - 3;
            freeBallY = compY;
            ballOwner = 1;
            ballJump();
            // me kidnaps the ball
            if (Math.abs(meX - compX) <= 21 && Math.abs(meY - compY) <= 5 && delay >= 5) {
                compMode = 2;
                delay = 0;
                delay2 = 0;
            }
            /* here we compute how the computer player icon will move */
            switch (case3Mode) {
                // go back from player
                case 1:
                    compX++;
                    if (compX > meX + 29) {
                        case3Mode = 2;
                    }
                    break;
                // go side from player
                case 2:
                    if (compY < myHeight * 3 / 4) {
                        compY++;
                        compYDirection = 1;
                        case3Mode = 3;
                    } else {
                        compY--;
                        compYDirection = 0;
                        case3Mode = 3;
                    }
                    break;
                // continue go side
                case 3:
                    if (compYDirection == 1 && compY <= meY + 15) {
                        compY++;
                        if (compY > y4) compY = y4;
                    } else if (compYDirection == 0 && compY >= meY - 15) {
                        compY--;
                        if (compY < y1) compY = y1;
                    } else {
                        case3Mode = 4;
                    }
                    break;
                // go forward
                case 4:
                    if (compX > myWidth * 11 / 32) {
                        compX--;
                    } else if (compX < myWidth * 1 / 4 - 3) {
                        compX++;
                    } else {
                        originalY = freeBallY;
                        compMode = 6;
                    }
                    break;
                // finally throw the ball
                case 5:
                    case3Mode = 1;
                    originalY = freeBallY;
                    compMode = 6;
                    yBall = 9;
                    deltaX = 1;
                    break;
                default:
                    //
            }
            if (compX - meX <= 15 && Math.abs(compY - meY) <= 10) {
                case3Mode = 1;
            }
            // check borders for comp player
            compCheckBorders();
            break;
        /*
         * not in use
         */
        case 4:
            break;
        /*
         * me throws the ball
         */
        case 5:
            freeBallY = originalY - deltaY[deltaX];
            deltaX++;
            xBall++;
            // checks if the ball hits the basket
            if (deltaX >= 29 && (xBall >= (x2 + x3) / 2 - 10 && xBall <= (x2 + x3) / 2 + 10)) {
                myScore += 2;
                oldY = 0;
                compMode = 3;
                // reset players and the ball
                meX = myWidth * 3 / 16;
                meY = myHeight * 3 / 4;
                compX = myWidth * 13 / 16;
                compY = myHeight * 3 / 4;
                xBall = compX - 8;
                yBall = 3;
                deltaX = 0;
            }
            // if the ball reaches the floor
            else if (deltaX >= 29 && (xBall < (x2 + x3) / 2 - 10 || xBall > (x2 + x3) / 2 + 10)) {
                freeBallY = meY;
                compMode = 1;
                oldY = 0;
                deltaX = 0;
            } else {
                oldY = freeBallY;
            }
            break;
        /*
         * comp throws the ball
         */
        case 6:
            freeBallY = originalY - deltaY[deltaX];
            deltaX++;
            xBall--;
            // checks if the ball hits the basket
            if (deltaX >= 29) {
                compScore += 2;
                oldY = 0;
                compMode = 2;
                // reset players and the ball
                meX = myWidth * 3 / 16;
                meY = myHeight * 3 / 4;
                compX = myWidth * 13 / 16;
                compY = myHeight * 3 / 4;
                xBall = meX + 8;
                yBall = 3;
                deltaX = 0;
            } else {
                oldY = freeBallY;
            }
            break;
        default:
            //
    }

    /*
     * after the coordinates of the human player icon, the computer
     * player icon and the ball have been set, we can go to the last
     * stage, which is painting the screen.
     */
    repaint();
}

/* this method controls the jump movement of the ball */
private void ballJump() {
    if (ballDir == 0) {
        yBall--;
        if (yBall < 3) {
            ballDir = 1;
        }
    } else {
        yBall++;
        if (yBall > 9) {
            ballDir = 0;
        }
    }
}

/* this method checks if the computer passed the border of the field */
private void compCheckBorders() {
    if (compY < y1) { compY = y1; }
    if (compY > y4) { compY = y4; }
    if (compX > (x2 + x3) / 2) { compX = (x2 + x3) / 2; }
    if (compX < (x1 + x4) / 2) { compX = (x1 + x4) / 2; }
}
After all of the coordinates have been set, we can proceed to the final stage, which is to actually paint the screen. This is done by the paint() method, which is triggered at the end of myMove2() by calling repaint().
/**
 * paints the screen
 */
public void paint(Graphics g) {
    Graphics saved = g;
    // these fields show the clock in the game
    String clockMinuteStr = new String();
    String clockSecondStr = new String();
    // initialize a buffered image
    if (offscreen != null) {
        g = offscreen.getGraphics();
    }
    // cleans the screen
    g.setColor(255, 255, 255);
    g.fillRect(0, 0, this.getWidth(), this.getHeight());
    // define corners of the field
    x1 = myWidth * 3 / 16;
    x2 = myWidth * 13 / 16;
    x3 = myWidth - 2;
    x4 = 2;
    y1 = myHeight * 1 / 2;
    y4 = myHeight - 1;
    // draw solid background
    g.setColor(255, 255, 255);
    g.fillRect(offsetWidth, offsetHeight, myWidth, myHeight);
    g.setColor(0, 0, 0);
    g.drawImage(screenShot, offsetWidth, offsetHeight, 0);
    // draw scores
    g.fillRect(offsetWidth + myWidth / 4, offsetHeight + myHeight / 8,
               myWidth / 2, myHeight / 4);
    g.setColor(255, 255, 255);
    g.drawRect(offsetWidth + myWidth / 4, offsetHeight + myHeight / 8,
               myWidth / 2, myHeight / 4);
    clockMinuteStr = String.valueOf(clockMinute);
    clockSecondStr = String.valueOf(clockSecond);
    if (clockMinuteStr.length() == 1)
        clockMinuteStr = "0" + clockMinuteStr;
    if (clockSecondStr.length() == 1)
        clockSecondStr = "0" + clockSecondStr;
    g.drawString(":", offsetWidth + myWidth / 2, offsetHeight + 15,
                 Graphics.TOP | Graphics.LEFT);
    g.drawString(clockMinuteStr, offsetWidth + myWidth / 2 - 15, offsetHeight + 17,
                 Graphics.TOP | Graphics.LEFT);
    g.drawString(clockSecondStr, offsetWidth + myWidth / 2 + 5, offsetHeight + 17,
                 Graphics.TOP | Graphics.LEFT);
    g.drawString(String.valueOf(myScore), offsetWidth + myWidth / 2 - 25,
                 offsetHeight + 32, Graphics.TOP | Graphics.LEFT);
    g.drawString(String.valueOf(compScore), offsetWidth + myWidth / 2 + 20,
                 offsetHeight + 32, Graphics.TOP | Graphics.LEFT);
    // paint player me
    g.drawImage(mePlayer, offsetWidth + meX, offsetHeight + meY - 19, 0);
    // paint computer player
    g.drawImage(compPlayer, offsetWidth + compX, offsetHeight + compY - 19, 0);
    // paint ball
    g.drawImage(tinyBall, offsetWidth + xBall, offsetHeight + freeBallY - yBall, 0);
    // paints the buffered image
    if (g != saved) {
        saved.drawImage(offscreen, 0, 0, Graphics.LEFT | Graphics.TOP);
    }
}
Handling Call Interrupts
When an incoming call occurs in the middle of the game, the
Canvas screen might disappear, so we might want to
freeze the game state (save all of the data regarding the players'
positions, ball position, number of points, time left, etc.). In our game, the freezing of the game is done by stopping the
timer. There are two functions related to the canvas disappearing
and reappearing.
hideNotify() is called after the
Canvas disappears and
showNotify() is
called when the
Canvas reappears. We stop the timer in
the
hideNotify() event and we reactivate the timer in the
showNotify() event.
/**
 * called when the screen disappears
 */
protected void hideNotify() {
    if (finishGame == 0) {
        parent.setCurrent("MainMenu");
    }
    // stops the internal timer and thus freezes the game.
    stopTimer();
}

/**
 * called when the screen reappears
 */
protected void showNotify() {
    // restarts the internal timer.
    startTimer();
}
Saving Persistent Data
J2ME applications have a mechanism to store data even after the user has terminated the application. We manage this data with the RecordStore class. In our application, we need to store persistent data in order to enable the player to stop the game at any given moment, exit the application, and return some time later to continue exactly from the moment it stopped. The data that we need to store includes the time left in the game, the coordinates of the two player icons, the ball coordinates, etc. We arrange all of this data in a byte[] array, and only after that can we store it.
/**
 * Writes all the game data into the record store
 * @param rec
 */
public void writeRMS(byte[] rec) {
    try {
        rs = RecordStore.openRecordStore("pocket", true);
        if (rs.getNumRecords() > 0)
            rs.setRecord(1, rec, 0, 31);
        else
            rs.addRecord(rec, 0, 31);
        rs.closeRecordStore();
    } catch (Exception e) {}
}

/**
 * Reads the data from the record store
 * @return
 */
public byte[] readRMS() {
    byte[] rec = new byte[31];
    try {
        rs = RecordStore.openRecordStore("pocket", true);
        rec = rs.getRecord(1);
        rs.closeRecordStore();
    } catch (Exception e) {}
    return rec;
}

/**
 * Deletes all the record stores
 */
public void deleteRMS() {
    if (RecordStore.listRecordStores() != null) {
        try {
            RecordStore.deleteRecordStore("pocket");
        } catch (Exception e) {}
    }
}
Other Screens
Two other screens in our application are InstructionsForm (shown in Figure 9) and AboutForm. We extend these two classes from the Form class and also implement CommandListener in order to handle key presses. The Form class is part of the "high-level" API, and allows us to easily insert plain text, images, and other items to be displayed.
Figure 9. Instruction screen
public class InstructionsForm extends Form implements CommandListener {

    private TestMidletMIDlet parent;
    private Command mainMenu = new Command("Back", Command.BACK, 1);

    public InstructionsForm(TestMidletMIDlet parent) {
        super("Instructions");
        addCommand(mainMenu);
        setCommandListener(this);
        // insert some text to be seen.
        this.append("The objective of this game is to shoot as many baskets as possible while preventing your opponent from shooting to your basket.\n\n");
        this.append("Move your player using 4 for moving left, 2 for moving up, 6 for moving right and 8 for moving down.\n\n");
        this.append("Press 5 to throw the ball");
        this.parent = parent;
    }

    public void commandAction(Command c, Displayable d) {
        if (c == mainMenu) {
            parent.setCurrent("MainMenu2");
        }
    }
}
Conclusion
In this article, we have discussed some of the most common features of the J2ME environment. These features include the
MIDlet class, which is the base class for all J2ME
applications, the low-level API's
Canvas class, and
high-level API classes such as
List and
Form. We also covered the organizational structure of
the game, such as the typical screens of the application.
As I mentioned before, this is a brief description of a typical J2ME game. Although games are abundant in the handheld environment, there are many other uses for J2ME applications, such as stock quote readers, RSS readers, etc.
Resources
- Source code files and PNG images of the game
- Executable files
- J2ME Wireless Development Tutorial
- The Canvas class
- Event handling by CommandListener
- Running threads by Timer and TimerTask
- Building menus by the List class
- The Command object
- The RecordStore class
File upload plays an integral part in many web applications. It is used in programs such as email clients, chat applications, commenting systems, among others.
Before JavaScript frameworks dominated web development, file upload systems all looked much the same: a form with a file input. After the form was submitted, the backend received the files, stored them, and redirected the user. With the popularity of JavaScript frameworks today, the situation is different. File uploads nowadays can offer several features, such as AJAX submissions, progress bars, and pause-and-resume support.
The finished code for this tutorial can be found at these Github repositories:
Application Architecture
In this article, we will be building a public file upload and sharing service. It will have a Node.js-powered backend and a Vue.js-powered frontend. The service will be used anonymously and won’t have any authenticated users. We will submit the file through AJAX and store it in the backend filesystem.
The file meta-data information will be stored in a MongoDB database. All the uploaded files will be listed on the homepage of the application. Upon clicking the name of the file, we will make a request to the backend to download the file. We will also be able to delete files.
We will include a URL-shortener feature in the application to make it easier to share links. This will be achieved by generating a unique hash for each uploaded file. We will also add a mechanism to restrict which file types can be uploaded with the application.
Install and Configure Packages
Before we can install any of the packages, we will need a few things. We must have Node.js installed on our system along with the MongoDB database.
Now that we have a clear understanding of the application requirements, let's start building it. The application will have a separate backend and frontend. Each will be in a separate folder. Create two folders named
client and
server in the same directory. Move to the
server folder and initialize a new
Node.js application with:
npm init
Accept the default values when prompted. Next, move out of the
server folder to the level where both folders reside. Initialize a new
Vue.js application with:
vue init webpack client
Accept the default values as well. When asked to install the router plugin, select yes. Now that we have scaffolded the frontend and backend, let’s install the required packages, starting with the frontend. Move into the
client folder and install the
Axios.js package using the following command:
npm install --save axios
Navigate again to the
server folder and install the packages:
npm install --save btoa body-parser express mongoose multer
Let’s outline the purpose of each of the packages:
btoa: this will help us create a unique hash for a file so we can have URL-shortener functionality.
body-parser: this makes it easy for the backend to access parameters from the frontend.
express: this is the main backend framework built on top of
Node.js
mongoose: this is an ORM library. It helps to insert and manipulate data using a MongoDB database.
multer: this is a library which allows us to receive and store files in the backend.
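To make the btoa piece concrete: the btoa package mirrors the browser's btoa() and base64-encodes a string, which is how we will later turn a MongoDB document id into a short, URL-friendly token. The sketch below is a hypothetical stand-in built on Node's own Buffer so it runs without installing the package; the id value is made up purely for illustration.

```javascript
// Base64-encode a string, like the browser's btoa() / the btoa npm package.
const btoa = (str) => Buffer.from(str, 'binary').toString('base64');

// A made-up MongoDB ObjectId string, used only for illustration.
const fakeId = '5ca1ab1ecafebabe12345678';
const encodedName = btoa(fakeId);

console.log(encodedName); // a short token we can embed in a share URL
```

Decoding with `Buffer.from(encodedName, 'base64')` recovers the original id, which is what lets a short link resolve back to a stored file.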
List Uploaded Files
Now that the packages are installed, we will list the files from the backend. For now, we do not have any uploaded files yet, but we will get to that later. In the
server folder, there should be an
index.js file. If it is not present, create it. In there, import several libraries by pasting the following:
const bodyParser = require('body-parser');
const express = require('express');

const app = express();

app.listen(3000, () => {
    console.log('Server started on port : ' + 3000);
});
Start the
Node.js server using:
node index.js
There should be a message in the console without any errors. The message should read:
Server started on port: 3000
Next, create a file in
models/file.js. In there paste in the following:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

let FileSchema = new Schema({
    name: {
        type: String,
        required: true,
        max: 100
    },
    encodedName: {
        type: String,
        required: false,
        max: 100,
        default: null
    }
});

module.exports = mongoose.model('file', FileSchema, 'files');
Here, we are using Mongoose.js to create the model to represent a single file. This is what we will use to query the database for uploaded files. To use it, we only have to import the exported module from the file.
Next, let's create a service file. This is where our logic for querying the database will be. It will also contain the connection information for the MongoDB database. Still in the
server directory, create a file in
services/file.service.js. In there, paste the following:
const mongoose = require('mongoose');
const File = require('../models/file');
const multer = require('multer');
const async = require('async');
const fs = require('fs');
const path = require('path');
const btoa = require('btoa');
In the lines above, we require several libraries, including the Node.js built-ins fs and path, plus the async package (note that async is a third-party npm package rather than a built-in, so install it with npm install --save async). We use async to perform many asynchronous operations and get a single success callback when all of them complete. fs is used to create, delete and manipulate local files. Finally, we use path to build folder paths in a way that is safe across environments.
Let's now connect to the database and write our method to fetch the information for all uploaded files. Note that we will only store file information in the database; the physical file itself will live in a folder on the server. In the same file, paste in the following:
const fileConfig = require('../config/file.config');

const mongoDB = fileConfig.dbConnection;
mongoose.connect(mongoDB, { useNewUrlParser: true });
mongoose.Promise = global.Promise;
This connects to our database and requires the configuration information for our file service. The file does not exist yet, so let's create it in
config/file.config.js. Paste in the following:
module.exports = {
    supportedMimes: {
        'text/csv': 'csv'
    },
    uploadsFolder: 'uploads',
    dbConnection: 'mongodb://127.0.0.1:27017/fileuploaddb'
}
The
supportedMimes config will enable us to restrict which file types we will allow for uploads. The keys in the object are the mime types and the values are the file extensions.
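To make that lookup concrete, here is a small hypothetical helper showing how such a mime map doubles as a whitelist: a supported mime type resolves to its extension, and anything else resolves to null, which we can treat as "rejected".

```javascript
const supportedMimes = { 'text/csv': 'csv' };

// Returns the extension for supported mime types, or null for rejected ones.
const extensionFor = (mimetype) => supportedMimes[mimetype] || null;

console.log(extensionFor('text/csv'));  // 'csv'
console.log(extensionFor('image/png')); // null
```

Adding a new allowed type is then just a matter of adding one key/value pair to the map.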
The
uploadsFolder configuration is used to specify the directory name for uploaded files. It is relative to the server root.
In the
dbConnection configuration, we are specifying the connection string for our database. The Mongoose library will create the database if it does not exist.
Finally, let us create a method for querying the files. Paste in the following into our
file.service.js file:
module.exports = {
    getAll: (req, res, next) => {
        File.find((err, files) => {
            if (err) {
                return res.status(404).end();
            }
            console.log('File fetched successfully');
            res.send(files);
        });
    }
}
This exports an object with a method called
getAll which fetches a list of files from the database. For now, the method only exists but isn't connected to any route so the frontend has no way of accessing it yet. Let's build our first route to fetch uploaded files.
Create a route file in
routes/api.js. Add in the following:
const express = require('express');
const router = express.Router();
const fileService = require('../services/file.service.js');
const app = express();

router.get('/files', fileService.getAll);

module.exports = router;
Before we start the server again, let’s paste the following into
index.js:
app.use(bodyParser.json());

const apiRoutes = require('./routes/api');
app.use('/api', apiRoutes);
Before visiting the route, we need one more step. With our current setup, the MongoDB database needs to be running on port
27017. This port is the default port when the server is started without any arguments. To start the server with the default port run the command:
mongod
To start it with a specific port, use the command:
mongod --port portnumber
If you specify a port number, do not forget to update the port number in the config file,
config/file.config.js in this line
dbConnection: 'mongodb://127.0.0.1:27017/fileuploaddb'
Now, the route is ready to serve files from the backend. The registered route will live at the location
localhost:3000/api/files. We do not have any files in the backend yet. If we visit the URL in the browser, we will get an empty array response. In the backend console, we should notice a message titled:
File fetched successfully.
Do not forget to restart the
Node.js server.
Build backend API for receiving files
At this stage, the backend application is able to connect to the database. Next, we will build the route API for receiving one or more files. First, we will only store the file locally. In the file
services/file.service.js, alongside the
getAll method, add the following:
uploadFile: (req, res, next) => { }
This will receive the files but will not insert any information into the database yet. We will get to that in the next section.
In
routes/api.js, add in the following line before the first route declaration:
const options = fileService.getFileOptions();
const multer = require('multer')(options);

router.post('/upload', multer.any(), fileService.uploadFile);
Here, we are including the
multer library and providing some options for it. During the upload route declaration:
router.post('/upload', multer.any(), fileService.uploadFile);
We are specifying the
multer library as a middleware. This is so that it will intercept uploaded files and do some filtering for unaccepted files. The options for the library do not exist yet. Let's add them in a method in the file
services/file.service.js. Add in the method below:
getFileOptions: () => {
    return {
        storage: multer.diskStorage({
            destination: fileConfig.uploadsFolder,
            filename: (req, file, cb) => {
                let extension = fileConfig.supportedMimes[file.mimetype];
                let originalname = file.originalname.split('.')[0];
                let fileName = originalname + '-' + (new Date()).getMilliseconds() + '.' + extension;
                cb(null, fileName);
            }
        }),
        fileFilter: (req, file, cb) => {
            let extension = fileConfig.supportedMimes[file.mimetype];
            if (!extension) {
                return cb(null, false);
            } else {
                cb(null, true);
            }
        }
    }
}
This method returns some configuration for filename construction. It will set the destination for the uploaded file. Then it filters files so that we only upload the ones specified in the file
config/file.config.js.
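The filename rule can be isolated as a pure function, which makes it easier to reason about. This sketch mirrors the callback's logic; the injected Date parameter is an assumption added here so the function is deterministic, and is not part of the article's code:

```javascript
// Mirrors the multer filename callback: '<basename>-<milliseconds>.<extension>'
const buildFileName = (originalname, extension, now = new Date()) =>
  originalname.split('.')[0] + '-' + now.getMilliseconds() + '.' + extension;

console.log(buildFileName('report.csv', 'csv', new Date(0))); // 'report-0.csv'
```

Note that getMilliseconds() only spans 0-999, so two uploads of the same filename within the same millisecond could still collide; a timestamp or random suffix would be a more robust choice.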
We cannot test the upload functionality with our current setup because we have not written the frontend yet. There is a tool called Postman that is designed exactly for this kind of API testing.
Store File meta-data in the database
Currently, the API route for receiving files only stores the file locally. Let's modify the application so it also stores the file meta-data in the database. In the file
services/file.service.js, modify the
uploadFile method like below:
uploadFile: (req, res, next) => {
    let savedModels = [];
    async.each(req.files, (file, callback) => {
        let fileModel = new File({ name: file.filename });
        fileModel.save((err) => {
            if (err) {
                return next('Error creating new file', err);
            }
            fileModel.encodedName = btoa(fileModel._id);
            fileModel.save((err) => {
                if (err) {
                    return next('Error creating new file', err);
                }
                savedModels.push(fileModel);
                callback();
                console.log('File created successfully');
            });
        });
    }, (err) => {
        if (err) {
            return res.status(400).end();
        }
        return res.send(savedModels);
    });
}
After the
multer library stores the files (locally) it will pass the file list to the callback above. The callback will create a unique hash for each file; then, it will store the file's original name and hashed key in the database.
The async part is necessary because the meta-data insertion happens asynchronously for each file. We want to return a response to the frontend only when all the information has been saved.
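If you prefer not to depend on the async package, the same "save everything, then respond once" pattern can be sketched with Promise.all. This is a hypothetical alternative, not the article's implementation; saveOne stands in for any per-file persistence function that returns a Promise:

```javascript
// Save every file's metadata, resolving only when all saves finish.
async function saveAll(files, saveOne) {
  // Promise.all rejects as soon as any single save fails,
  // which plays the role of async.each's error callback.
  return Promise.all(files.map((file) => saveOne(file)));
}

// Usage sketch: saveAll(req.files, persistFile).then(models => res.send(models));
```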
Any file which fails the filter test of the
multer middleware will not be passed to the
uploadFile callback. If no files have been uploaded, we will return an empty array to the frontend. We can then deal with any validation however we wish.
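One simple way to deal with that empty-array case on the client is a small guard like the following (an illustrative helper, not part of the article's code):

```javascript
// True only when the backend reports at least one stored file.
const uploadSucceeded = (responseData) =>
  Array.isArray(responseData) && responseData.length > 0;

console.log(uploadSucceeded([]));              // false
console.log(uploadSucceeded([{ name: 'a' }])); // true
```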
Build Frontend for listing Files
Now let's add functionality to our frontend so that it can list files from the backend. Navigate to the frontend folder. Start the development server using the command below:
npm run dev
The frontend application will be running on the URL
localhost:8080.
The first thing we need to do is allow the frontend to be able to send AJAX requests to the backend. To allow this during development, in the client root folder, let's modify the file
config/index.js. Modify the
proxyTable key to:
proxyTable: {
    '/api': '',
    '/file': ''
},
We have covered more details about the above configuration in a previous article, so check that if you’re facing any difficulties.
Let us create a component to list files. This will be responsible for fetching files from the backend. It will also loop over the list of returned files and create many instances of a child component called
UploadedFile, which we will create later.
To begin with, create a component in
src/components/UploadedFilesList.vue.
In there, paste in the following:
<template>
  <div>
    <h1>Files List</h1>
    <ul>
      <uploaded-file
        v-for="file in files"
        :key="file._id"
        :file="file"
      ></uploaded-file>
    </ul>
  </div>
</template>

<script>
import axios from "axios";
import UploadedFile from "./UploadedFile";

export default {
  name: "UploadedFilesList",
  data() {
    return {
      files: []
    };
  },
  components: {
    UploadedFile
  },
  methods: {
    fetchFiles() {
      axios.get("/api/files").then(response => {
        this.$set(this, "files", response.data);
      });
    }
  },
  mounted() {
    this.fetchFiles();
  }
};
</script>

<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
</style>
To list files, we are sending a request to the backend when the component is mounted. This is done using
Axios.js, an HTTP library for JavaScript. The component is initially created with an empty list of files. When the files list data is returned, we add it to the list of files.
Let's create the
UploadedFile component. Create a component file in
src/components/UploadedFile.vue. In there, paste in the following:
<template>
  <div>
    <div>
      <a>{{ file.name }}</a>
      <button>Delete</button>
    </div>
  </div>
</template>

<script>
import axios from "axios";

export default {
  name: "UploadedFile",
  props: ["file"],
  data() {
    return {};
  },
  methods: {}
};
</script>

<style scoped>
</style>
All that this component is currently doing is display the file name. The delete button does not currently perform any action but we will get to that later.
Next, let's configure the router for our application so we can display the list of files.
In the
src/router/index.js file, modify the router as shown below:
import Vue from 'vue'
import Router from 'vue-router'
import Main from '@/components/Main'

Vue.use(Router)

export default new Router({
  routes: [
    {
      path: '/',
      name: 'Main',
      component: Main
    }
  ]
})
The router is referencing the
Main component file which does not exist yet. Let's create it in
src/components/Main.vue. Paste in the following:
<template>
  <div>
    <h1>Anonymous File Uploader System</h1>
    <div>
      <uploaded-files></uploaded-files>
    </div>
  </div>
</template>

<script>
import axios from "axios";
import UploadedFiles from "@/components/UploadedFilesList";

export default {
  name: "Main",
  data() {
    return {
      files: []
    };
  },
  components: {
    UploadedFiles
  },
  methods: {}
};
</script>

<style scoped>
h1,
h2 {
  font-weight: normal;
}
ul {
  list-style-type: none;
  padding: 0;
}
li {
  display: inline-block;
  margin: 0 10px;
}
a {
  color: #42b983;
}
</style>
Delete the existing file
src/components/Hello.vue. It was created during the scaffolding stage and we will not need it. Boot up the frontend development server using
npm run dev. If there aren't any files in the backend, the list will be empty. If all goes well, we should see the text:
Files List
Build Frontend for Uploading Files
At this stage, we can list files from the server. Our next task is to add the ability to upload files.
First, create a component for uploading files in
src/components/UploadsContainer.vue. In there, paste the following:
<template>
  <div class="hello">
    <h1>Uploader</h1>
    <div>
      <input type="file" multiple v-bind:name="uploadName" @change="fileSelected" v-if="!uploadStarted" />
      <p v-if="uploadStarted">Uploading...</p>
    </div>
    <div>
      <button v-if="!uploadStarted" @click="startUpload">Start Upload</button>
      <button v-if="uploadStarted" @click="cancelUpload">Cancel Upload</button>
    </div>
  </div>
</template>

<script>
import axios from "axios";

const CancelToken = axios.CancelToken;
const source = CancelToken.source();

export default {
  name: "UploadsContainer",
  data() {
    return {
      uploadStarted: false,
      uploadName: "files",
      uploadUrl: "/api/upload",
      formData: null
    };
  },
  methods: {
    fileSelected(event) {
      if (event.target.files.length === 0) {
        return;
      }
      let files = event.target.files;
      let name = event.target.name;
      let formData = new FormData();
      for (let index = 0; index < files.length; index++) {
        formData.append(name, files[index], files[index].name);
      }
      this.$set(this, "formData", formData);
    },
    startUpload() {
      this.$set(this, "uploadStarted", true);
      this.uploadData(this.formData);
    },
    cancelUpload() {
      if (this.uploadStarted) {
        source.cancel();
      }
      this.$set(this, "uploadStarted", false);
    },
    uploadData(formData) {
      if (this.formData === null) {
        return;
      }
      axios
        .post(this.uploadUrl, formData, { cancelToken: source.token })
        .then(response => {
          if (response.data.length === 0) {
            alert("File not uploaded. Please check the file types");
            return;
          }
          this.updateFilesList(response.data);
          this.$set(this, "formData", null);
        })
        .catch(() => {
          alert("Error occurred");
        })
        .then(() => {
          this.$set(this, "uploadStarted", false);
        });
    },
    updateFilesList(files) {
      this.$emit("files-uploaded", files);
    }
  }
};
</script>

<style scoped>
</style>
Add it to the dependencies of
src/components/Main.vue as shown below. First, let us import it:
import UploadsContainer from '@/components/UploadsContainer'
Then we list it as a child component:
components: { UploadedFiles, UploadsContainer },
Add a method as shown below:
methods: {
  filesUploaded(files) {
    this.$refs.filesList.filesUploaded(files);
  }
}
Then, instantiate it in the template as shown below:
<template>
  <div class="hello">
    <h1>Anonymous File Uploader System</h1>
    <div>
      <uploads-container v-on:files-uploaded="filesUploaded"></uploads-container>
    </div>
    <div>
      <uploaded-files ref="filesList"></uploaded-files>
    </div>
  </div>
</template>
In the component
src/components/UploadedFilesList.vue, add a method as below:
filesUploaded(files) {
  files.forEach(file => {
    this.files.push(file);
  });
}
Let us break down what is happening in these components.
Inside
src/components/UploadsContainer, we have a file upload input. Attached to it is a change event handler called
fileSelected:
@change="fileSelected"
When a file is selected, this handler is fired. The logic in this handler sets the selected files as a property in the component using the following:
let formData = new FormData();
for (let index = 0; index < files.length; index++) {
  formData.append(name, files[index], files[index].name);
}
this.$set(this, 'formData', formData);
This is using HTML5's native FormData API.
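If you haven't used FormData before, here is a minimal, framework-free sketch of the same pattern (the file contents and names are illustrative; this runs in any modern browser, and in Node.js 18+ where FormData and Blob are globals):

```javascript
// Appending several files under one key, as the handler above does.
const formData = new FormData();
const files = [
  new Blob(["hello"], { type: "text/plain" }),
  new Blob(["world"], { type: "text/plain" }),
];
files.forEach((file, index) => {
  // The third argument sets the filename sent in the multipart body.
  formData.append("files", file, `file-${index}.txt`);
});
// getAll returns every value appended under the same key.
console.log(formData.getAll("files").length); // 2
```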
Then we have a submit button:
<button v-if="!uploadStarted" @click="startUpload">Start Upload</button>
This calls a method named
startUpload which is responsible for setting the status as actively uploading. Then, it calls another method which sends the
formData property, containing the files to the backend.
If the upload was successful, we set the
formData to null. Then, we emit an event to the parent container so it can update the uploaded files list using:
updateFilesList(files) {
  this.$emit("files-uploaded", files);
}
If an error occurs, we show an alert to the user. We also have a cancel feature which will be triggered by the cancel button below:
<button v-if="uploadStarted" @click="cancelUpload">Cancel Upload</button>
And it will only show when an upload process has started. The “start upload” button will only display when there is no upload in progress.
We are binding the form field to a property which specifies the key that will be used when sending files to the server:
v-bind:name="uploadName"
The input field will also be hidden when an upload is in progress.
Onto the next file
src/components/Main.vue. After instantiating
UploadsContainer, we listen to an event using the syntax:
v-on:files-uploaded="filesUploaded"
This will receive the uploaded files so we can pass them to a method named
filesUploaded in the component
src/components/UploadedFilesList.vue. This will make sure the list is updated.
Add support for file download
Frontend download setup
Now that we have the ability to upload files, let's make sure we can download them.
First, create a component in
src/components/FileDownloader.vue.
In there, paste the following:
<template>
  <iframe v-bind:src="source"></iframe>
</template>

<script>
export default {
  data() {
    return { source: "" };
  },
  methods: {
    downloadFile(source) {
      this.$set(this, "source", source);
    }
  }
};
</script>
This component includes an iframe in the template. Anytime the source for the iframe changes, it will make a request to that URL.
In the component
src/components/UploadedFile.vue, include the downloader:
import FileDownloader from './FileDownloader'
Let us register it first:
components: { FileDownloader },
Then we can use it in the template:
<file-downloader ref="downloader" :key="downloadKey"></file-downloader>
Add a method:
downloadFile(event) {
  event.preventDefault();
  let url = event.target.href;
  this.downloadKey += 1;
  this.$nextTick().then(() => {
    this.$refs.downloader.downloadFile(url);
  });
}
Then, modify the link in the template as shown below:
<a v-bind:href="'/file/download/' + file.encodedName" @click="downloadFile">{{ file.name }}</a>
This generates the appropriate URL by binding to the
encodedName property of our file props.
Let's make sure that the download is triggered on every click. We have to bind the download component's key to a data property on the parent component.
Add a data property in
src/components/UploadedFile.vue:
return { downloadKey: 1 }
This key is incremented on each click of the download link. This forces the iframe to rerender and hence triggers the download.
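To illustrate why a changed key forces a re-mount, here is a simplified, hypothetical model of keyed diffing — not Vue's actual implementation, just the core idea that a new key makes the diff treat the element as brand new instead of reusing the old instance:

```javascript
// Toy keyed diff: instances are reused only when the key is unchanged.
function render(vnodes, prevInstances) {
  const next = new Map();
  let mounted = 0;
  for (const v of vnodes) {
    if (prevInstances.has(v.key)) {
      next.set(v.key, prevInstances.get(v.key)); // same key: reused, no re-mount
    } else {
      next.set(v.key, { src: v.src });           // new key: fresh mount
      mounted++;
    }
  }
  return { instances: next, mounted };
}

let state = render([{ key: 1, src: "/file/a" }], new Map());
console.log(state.mounted); // 1 (first mount)

// Same key: the iframe instance is reused, so no new request is triggered.
state = render([{ key: 1, src: "/file/a" }], state.instances);
console.log(state.mounted); // 0

// Incremented key: the diff throws the old element away and mounts a new one.
state = render([{ key: 2, src: "/file/a" }], state.instances);
console.log(state.mounted); // 1
```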
Backend download setup
Now, the frontend is ready for making download requests. However, the backend has not been set up to serve the files yet. Let's set it up now. In the backend file
index.js, add the following lines before the call to start the server:
const fileRoute = require('./routes/file');
app.use('/file', fileRoute);
Next, create the route file
routes/file.js. In there, add the content:
const express = require('express');
const router = express.Router();
const fileService = require('../services/file.service.js');

router.get('/download/:name', fileService.downloadFile);

module.exports = router;
This sets up a route which accepts the hashed key of a file as an argument. This argument is then used to fetch the file from the database to get the real name of the file. Then, we reply with a download response.
Let’s set up the method handler for the route. Inside the file
services/file.service.js, add a method to the exports as shown:
downloadFile(req, res, next) {
  const sendFile = (file) => {
    let fileLocation = path.join(__dirname, '..', 'uploads', file.name);
    res.download(fileLocation, (err) => {
      if (err) {
        res.status(400).end();
      }
    });
  };
  // Try the real file name first, then fall back to the hashed (encoded) name.
  File.findOne({ name: req.params.name }, (err, file) => {
    if (err) {
      return res.status(400).end();
    }
    if (file) {
      return sendFile(file);
    }
    File.findOne({ encodedName: req.params.name }, (err, file) => {
      if (err) {
        return res.status(400).end();
      }
      if (!file) {
        return res.status(404).end();
      }
      sendFile(file);
    });
  });
}
When we restart the backend server, any file link on the frontend can now be clicked to download that file.
Add Frontend support for deleting files
Finally, let's add functionality to delete files. Let's work on the frontend first. In the frontend file
src/components/UploadedFile.vue, add the method below:
deleteFile (file) { this.$emit("delete-file", file); },
Modify the delete button in the component to the following:
<button v-on:click="deleteFile(file)">Delete</button>
Upon clicking the button, the component emits an event called
delete-file to the parent.
Let's modify the parent component
src/components/UploadedFilesList.vue. Modify the
UploadFile instantiation to the following:
<uploaded-file v-for="file in files" v-bind:key="file._id" v-bind:file="file" v-on:delete-file="deleteFile"></uploaded-file>
In there, we add an event listener for the emitted child event we just made. This in turn calls a method named
deleteFile in the parent. Let's create that method:
deleteFile(file) {
  if (confirm('Are you sure you want to delete the file?')) {
    axios.delete('/api/files/' + file._id)
      .then(() => {
        let fileIndex = this.files.indexOf(file);
        this.files.splice(fileIndex, 1);
      })
      .catch(() => {
        console.log("Error deleting file");
      });
  }
}
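As a side note, the removal above relies on indexOf matching by object identity rather than by value, which works because the list holds the very same object that was clicked. A tiny sketch of that pattern:

```javascript
// Remove-by-identity: indexOf finds the exact object, splice removes it in place.
const files = [{ _id: "a" }, { _id: "b" }, { _id: "c" }];
const target = files[1];             // same object reference as in the array
const fileIndex = files.indexOf(target);
files.splice(fileIndex, 1);          // removes one element starting at fileIndex
console.log(files.map(f => f._id)); // ["a", "c"]
```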
The frontend is ready to send AJAX requests to the backend.
Let's set up the backend to receive the request. In the backend file
routes/api.js, add the following line just before the export statement:
router.delete('/files/:id', fileService.deleteFile);
Then, in the file
services/file.service.js, add the method below:
deleteFile(req, res, next) {
  File.findOne({ _id: req.params.id }, (err, file) => {
    if (err) {
      return res.status(400).end();
    }
    if (!file) {
      return res.status(404).end();
    }
    let fileLocation = path.join(__dirname, '..', 'uploads', file.name);
    fs.unlink(fileLocation, () => {
      File.deleteOne(file, (err) => {
        if (err) {
          return next(err);
        }
        return res.send([]);
      });
    });
  });
},
Now, we can delete files. When we click the delete link, we get an alert to confirm. If we click “ok”, the file is deleted from the backend folder and the information is removed from the database.
Conclusion
That brings us to the end of our article. We created a file upload service capable of handling multiple file uploads, and it lets us delete and download the files as well.
This is only a basic upload application. Possible expansions to this application could be advanced validation, upload progress, image preview feature, or multiple file downloads. Hopefully, this brought you some inspiration and ideas. As usual, if there are any questions, please tweet them directly to the author at @LaminEvra.
Also, if you're building Vue applications with sensitive logic, be sure to protect them against code theft and reverse-engineering by following our guide. | https://blog.jscrambler.com/how-to-create-a-public-file-sharing-service-with-vue-js-and-node-js/ | CC-MAIN-2019-26 | refinedweb | 4,077 | 51.04 |
Red Hat Bugzilla – Full Text Bug Listing
Description of problem:
After an apparently successful upgrade from fc6 to f7, the control-center
package was not installed/upgraded. Control-center was installed in fc6 before
the upgrade, so the fc6 control-center files were still on the filesystem and
still available in the gnome menu. Choosing the Desktop Background menu item
caused a Bug Buddy crash, for example. Installing the f7 version of
control-center solved the problem.
Version-Release number of selected component (if applicable):
FC6 fully updated, then using Fedora-7-i386 DVD final release to upgrade.
control-center.i386 1:2.18.0-18.fc7 was installed to solve problem.
How reproducible:
Not sure. Probably always.
Steps to Reproduce:
1. Have FC6 i386 installed, fully updated, control-center installed, on an IBM
Thinkpad T43.
2. Upgrade to F7 using Fedora-7-i386 DVD (media verified ok)
3. Create a new user to avoid stale gnome profile data
4. Log in to gnome session
5. Select menu System->Preferences->Look and Feel->Desktop Background
6. Notice Bug Buddy crash
7. Open Terminal, run gnome-background-properties
8. See this error:
(gnome-background-properties:2664): libglade-WARNING **: could not find glade
file '/usr/share/applications/gnome-background-properties.glade'
(gnome-background-properties:2664): libglade-CRITICAL **: glade_xml_get_widget:
assertion `self != NULL' failed
...etc...
Actual results:
See steps 6 and 8 above
Expected results:
See Desktop Wallpaper change dialog
Additional info:
# rpm -q control-center
package control-center is not installed
# yum install control-center
...
--> Running transaction check
---> Package control-center.i386 1:2.18.0-18.fc7 set to be updated
--> Processing Dependency: libgnomekbd.so.1 for package: control-center
--> Processing Dependency: libgnomekbdui.so.1 for package: control-center
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package libgnomekbd.i386 0:2.18.0-1.fc7 set to be updated
...
Notice libgnomekbd also installed as dependency.
After installing control-center.i386 1:2.18.0-18.fc7, the Desktop Background app
works as expected.
Could you please attach the upgrade logs (which should have been added to your
/root)? We'd then see why it wasn't upgraded.
Here's the relevant part. Full attachment next.
Upgrading control-center - 1:2.18.0-18.fc7.i386
I/O warning : failed to load external entity
"/etc/gconf/schemas/control-center.schemas"
Failed to open `/etc/gconf/schemas/control-center.schemas': No such file or
directory
error: unpacking of archive failed on file
/etc/gconf/schemas/apps_gnome_settings_daemon_default_editor.sc
hemas: cpio: lsetfilecon
Created attachment 156093 [details]
Upgrade log from fc6-f7 upgrade
There are lots of these types of errors in the upgrade log. What would cause
this? I should have had plenty of disk space available. It looks limited to
/etc/security/console.apps and /etc/gconf/schemas.
# bzcat upgrade.log.bz2 | grep ^error
error: unpacking of archive failed on file
/etc/security/console.apps/system-config-printer: cpio: lsetfilecon
error: unpacking of archive failed on file
/etc/security/console.apps/serviceconf: cpio: lsetfilecon
error: unpacking of archive failed on file /etc/security/console.apps/setup:
cpio: lsetfilecon
error: unpacking of archive failed on file
/etc/gconf/schemas/desktop_gnome_file_sharing.schemas: cpio: lsetfilecon
... etc 49 lines total ...
Maybe an unclean file system, I'm not sure. Reassigning to anaconda, as it seems
related.
What's the output of
ls -lRZ /etc/gconf/schemas /etc/security/console.apps
Created attachment 156123 [details]
ls -lRZ /etc/gconf/schemas /etc/security/console.apps
Maybe some unlabeled files? I had selinux disabled via "selinux=0" on the
kernel boot line in fc6.
Now in f7, the kernel boot line is removed, and selinux is disabled via
/etc/selinux/config:
$ cat /etc/selinux/config | grep ^SELINUX=
SELINUX=disabled
Did you boot the installer with 'selinux=0'? And had you changed
/etc/selinux/config prior to starting the upgrade?
If no to both, then we were probably doing the upgrade as though selinux had
been enabled and then getting very unhappy due to the labeling problems. Dan --
is there any way we can detect this?
import selinux
if selinux.is_selinux_enabled() < 1:
Will tell you if selinux is disabled.
No, I didn't boot the installer with selinux=0.
Yes, afaik /etc/selinux/config was set to disabled before the upgrade.
As an aside, isn't it a bit odd to do an os install under selinux? Are we
expecting security protection while installing the os?
I can say that this problem manifests itself in odd problems. For example,
gnome-panel was also not upgraded and all of the panel applets refused to start
giving errors such as:
"OAFIID:GNOME_ClockApplet libpanel-applet-2.so.0: cannot open shared object
file: No such file or directory"
(In reply to comment #9)
> Will tell you if selinux is disabled.
Sure, that will tell me about _now_. But it won't let me find out that someone
has been booting their install with selinux=0 forever but didn't boot anaconda
with it. Pain lies this way...
(In reply to comment #10)
> As an aside, isn't it a bit odd to do an os install under selinux? Are we
> expecting security protection while installing the os?
We have to have SELinux enabled (though in permissive) when doing the upgrade so
that we can set file contexts on things being installed. I'm not quite sure why
we'd be getting EPERM with lsetfilecon() in this case, though :-/
Maybe anaconda could look at the existing grub.conf for selinux=0, or optionally
scan/relabel the filesystem looking for unlabeled files before install? Have we
determined that unlabeled files caused this problem?
If there is a /.autorelabel file you could do a getfilecon on it. If it does
not have a file context, it is a good indicator of selinux=0.
Init scripts create /.autorelabel any time you boot selinux=0. So the file will
get created without a file context. The scripts do this, so the first time you
boot selinux=1 a relabel will happen.
$ ls -l /.autorelabel
-rw-r--r-- 1 root root 0 2006-05-17 14:56 /.autorelabel
Maybe the install process skipped the relabel?
$ ls -lZ /.autorelabel
-rw-r--r-- root root /.autorelabel
No the autorelabel is used when you boot with selinux=1. The installer
currently ignores it.
Wouldn't that have prevented this problem? Maybe the installer should do a
relabel too?
fyi, these packages were not upgraded as a result of this issue. I grepped the
upgrade.log file to install the missing ones.
Installing:
bug-buddy i386 1:2.18.0-2.fc7 fedora 443 k
compiz i386 0.3.6-8.fc7 fedora 547 k
devhelp i386 0.13-8.fc7 updates 185 k
eog i386 2.18.0.1-2.fc7 fedora 1.2 M
evince i386 0.8.0-5.fc7 fedora 1.1 M
evolution i386 2.10.1-4.fc7 fedora 29 M
file-roller i386 2.18.1-1.fc7 fedora 966 k
gcalctool i386 5.9.14-1.fc7 fedora 1.1 M
gdm i386 1:2.18.0-14.fc7 fedora 4.3 M
gedit i386 1:2.18.0-3.fc7 fedora 4.2 M
gnome-applet-vm i386 0.1.2-2.fc7 fedora 76 k
gnome-applets i386 1:2.18.0-7.fc7 fedora 10 M
gnome-bluetooth i386 0.8.0-4.fc7 fedora 249 k
gnome-games i386 1:2.18.1.1-1.fc7 fedora 8.5 M
gnome-media i386 2.18.0-3.fc7 fedora 2.5 M
gnome-netstatus i386 2.12.1-1.fc7 fedora 298 k
gnome-pilot i386 2.0.15-5.fc7 fedora 599 k
gnome-power-manager i386 2.18.2-4.fc7 fedora 2.8 M
gnome-screensaver i386 2.18.0-13.fc7 fedora 1.8 M
gnome-session i386 2.18.0-7.fc7 fedora 476 k
gnome-terminal i386 2.18.0-1.fc7 fedora 2.3 M
gnome-user-share i386 0.11-2.fc7 fedora 44 k
gnome-utils i386 1:2.18.0-1.fc7 fedora 4.9 M
gnome-volume-manager i386 2.17.0-7.fc7 fedora 431 k
gstreamer-plugins-good i386 0.10.5-6.fc7 fedora 652 k
gthumb i386 2.10.2-3.fc7 fedora 2.2 M
nautilus i386 2.18.1-2.fc7 fedora 4.3 M
nautilus-cd-burner i386 2.18.0-2.fc7 fedora 503 k
nautilus-sendto i386 0.10-4.fc7 fedora 80 k
pirut noarch 1.3.7-1.fc7 fedora 265 k
planner i386 0.14.2-4.fc7 fedora 3.5 M
setuptool i386 1.19.2-2 fedora 51 k
sound-juicer i386 2.16.4-1.fc7 fedora 1.1 M
system-config-date noarch 1.9.0-1.fc7 fedora 1.1 M
system-config-printer i386 0.7.63.1-1.fc7 fedora 173 k
system-config-services noarch 0.9.8-1.fc7 fedora 188 k
system-config-soundcard noarch 2.0.6-5.fc7 fedora 1.1 M
vino i386 2.18.0-1.fc7 fedora 411 k
virt-manager i386 0.4.0-2.fc7 fedora 1.3 M
Fix for this committed to CVS
Fix confirmed upgrading F7 to F8. | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=242510 | CC-MAIN-2017-22 | refinedweb | 1,548 | 55.1 |
I'm trying to compile an Android kernel from source and I believe I have downloaded all the right packages to do it, but for some reason I get this error --->
arm-linux-androideabi-gcc: error: unrecognized command line option '-mgeneral-regs-only'
/home/livlogik/android/kernel/H901BK_L_Kernel/./Kbuild:35: recipe for target 'kernel/bounds.s' failed
make[1]: *** [kernel/bounds.s] Error 1
Makefile:858: recipe for target 'prepare0' failed
make: *** [prepare0] Error 2
As can be seen from the build error message:
drivers/media/platform/msm/camera_v2/sensor/msm_sensor.c:20:27: fatal error: ./mh1/msm_mh1.h: No such file or directory
#include <./mh1/msm_mh1.h>
the compiler just can't find the
msm_mh1.h file. This is because the path specified in the
#include directive isn't correct. Most probably it's a typo: instead of
./ there should be
../.
To fix that error, in
drivers/media/platform/msm/camera_v2/sensor/msm_sensor.c file change this line:
#include <./mh1/msm_mh1.h>
to this line
#include "../mh1/msm_mh1.h"
After this, the
make command should work fine. Also, the kernel image file will be available at
arch/arm64/boot, and it's not a
zImage as stated in the documentation; it's actually
Image.gz. The uncompressed kernel image is the
Image file.
Answering your question in comments:
Is there any way to make it compress into a zImage?
From Documentation/arm64/booting.txt:
The AArch64 kernel does not currently provide a decompressor and therefore requires decompression (gzip etc.) to be performed by the boot loader if a compressed
Image target (e.g.
Image.gz) is used. For bootloaders that do not implement this requirement, the uncompressed
Image target is available instead.
Basically, a
zImage is just a gzipped and self-extracting
Image. So a
zImage file consists of a program for unpacking a gzip archive at the beginning, followed by the gzipped
Image; when the kernel is run by the bootloader, it unpacks itself (hence the "self-extracting" term) and then starts running.
...So I can make it flashable
In the case of arm64, you don't have a
zImage, so most likely you need to use the
Image file (which acts in the same way, only its size is bigger). You can create a
boot.img from the
Image file and the built AFS ramdisk (using the
mkbootimg tool) and then just run
fastboot flash boot boot.img. Refer to this documentation for an example. Of course, some things can be different for your platform, so try to find instructions specific to it.