#include <random>
#include "util.h"
#include "sketch.h"
#include "int_utils.h"
Definition at line 336 of file sketch_impl.h.
Definition at line 281 of file sketch_impl.h.
Compute the quotient of a polynomial division of val by mod, putting the quotient in div and the remainder in val.
Definition at line 38 of file sketch_impl.h.
Returns the roots of a fully factorizable polynomial.
This function assumes that the input polynomial is square-free and not the zero polynomial (represented by an empty vector).
In case the square-free polynomial is not fully factorizable, i.e., it has fewer roots than its degree, the empty vector is returned.
Definition at line 263 of file sketch_impl.h.
Definition at line 346 of file sketch_impl.h.
Compute the GCD of two polynomials, putting the result in a.
b will be cleared.
Definition at line 76 of file sketch_impl.h.
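The remainder-and-GCD routines above can be sketched in miniature. This is not minisketch's implementation (which works over GF(2^n) with coefficient vectors); it is an illustrative sketch over plain GF(2), with a polynomial represented as an integer bitmask where bit i is the coefficient of x^i:

```python
def poly_mod(val: int, mod: int) -> int:
    """Remainder of val divided by mod in GF(2)[x], bitmask representation."""
    dm = mod.bit_length() - 1          # degree of mod
    while val and val.bit_length() - 1 >= dm:
        # Subtracting in GF(2) is XOR, so cancel the leading term directly.
        val ^= mod << (val.bit_length() - 1 - dm)
    return val

def poly_gcd(a: int, b: int) -> int:
    """GCD of two GF(2)[x] polynomials via Euclid's algorithm."""
    while b:
        a, b = b, poly_mod(a, b)
    return a

# x^2 + 1 factors as (x + 1)^2 over GF(2), so gcd(x^2 + 1, x + 1) = x + 1:
print(bin(poly_gcd(0b101, 0b11)))  # 0b11
```

Because coefficients live in GF(2), subtraction is XOR, which is why the Euclidean reduction needs no coefficient division.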
Make a polynomial monic.
Definition at line 62 of file sketch_impl.h.
Compute the remainder of a polynomial division of val by mod, putting the result in mod.
Definition at line 18 of file sketch_impl.h.
One step of the root finding algorithm; finds roots of stack[pos] and adds them to roots.
Stack elements >= pos are destroyed.
It operates on a stack of polynomials. The polynomial operated on is stack[pos]; elements of stack with index higher than pos are used as scratch space. stack[pos] is assumed to be a square-free polynomial. If fully_factorizable is true, it is also assumed to have no irreducible factors of degree higher than 1.
This implements the Berlekamp trace algorithm, plus an efficient test to fail fast in case the polynomial cannot be fully factored.
Definition at line 128 of file sketch_impl.h.
Definition at line 325 of file sketch_impl.h.
Square a polynomial.
Definition at line 92 of file sketch_impl.h.
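Squaring shows why characteristic 2 is convenient: every cross term of (Σ aᵢxⁱ)² has an even coefficient and vanishes, so squaring just spreads the coefficient bits apart. A small sketch over GF(2), again using an integer bitmask as an illustrative representation (not minisketch's own):

```python
def poly_sqr(p: int) -> int:
    """Square a GF(2)[x] polynomial: (sum of a_i x^i)^2 = sum of a_i x^(2i)."""
    out, i = 0, 0
    while p:
        if p & 1:
            out |= 1 << (2 * i)   # coefficient of x^i moves to x^(2i)
        p >>= 1
        i += 1
    return out

# (x + 1)^2 = x^2 + 2x + 1 = x^2 + 1 over GF(2):
print(bin(poly_sqr(0b11)))  # 0b101
```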
Compute the trace map of (param*x) modulo mod, putting the result in out.
Definition at line 102 of file sketch_impl.h.
https://doxygen.bitcoincore.org/sketch__impl_8h.html
Hi people! I'm going crazy trying to solve this issue; nobody else seems to encounter the same problem. I'm writing a simple Android application (client) which sends a string over a Socket once a button is pressed. Processing on my laptop (server) then receives and handles it. The problem occurs in Processing when the Android app closes the socket with socket.close(): Processing throws SocketException: Client got end-of-stream. If I don't close the socket, everything works fine and no errors appear, but when I close the Java window, Processing throws as many exceptions as there were sockets opened during the applet's life and closes them all. I don't think that's good, and everyone says sockets should be closed.
Here is the app code (Client):
public void onClick(View view) {
    t = T.getText().toString();
    if (view.getId() == R.id.send_button) { // check button through its id and set string to send
        msg = t + " " + string1;
    }
    serverIpAddress = serverIp.getText().toString();
    serverInPort = Integer.parseInt(serverPort.getText().toString());
    if (!serverIpAddress.equals("") && serverInPort != 0) {
        sendMsg(msg);
    }
}

public void sendMsg(String msg) { // function to send a TCP string
    try {
        InetAddress serverAddr = InetAddress.getByName(serverIpAddress);
        Log.d("MainActivity", "C: Connecting...");
        Socket socket = new Socket(serverAddr, serverInPort);
        try {
            Log.d("MainActivity", "C: Sending...");
            PrintWriter out = new PrintWriter(new BufferedWriter(
                    new OutputStreamWriter(socket.getOutputStream())), true);
            out.println(msg);
            Log.d("MainActivity", "C: Sent.");
            Toast.makeText(this, msg, duration).show();
        } catch (Exception e) {
            Log.e("MainActivity", "S: Error.", e);
        }
        socket.close();
        Log.d("MainActivity", "C: Closed.");
    } catch (Exception e) {
        Log.e("MainActivity", "C: Error.", e);
    }
}
And this is the Processing testing code (Server):
int port = 1111;
Server server;

void setup() {
  size(400, 400);
  background(0);
  server = new Server(this, port);
}

void draw() {
  // Get the next available client
  Client thisClient = server.available();
  // If the client is not null, and says something, display what it said
  if (thisClient != null) {
    String incomingMsg = thisClient.readString();
    if (incomingMsg != null) {
      print(thisClient.ip() + "," + port + ": " + incomingMsg);
    }
  }
}
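The root of the thread's problem is that a peer calling close() is a normal end-of-stream, not a failure. A minimal sketch of the same exchange in plain Python (not Processing's net library) shows the pattern of reading until an empty receive:

```python
import socket
import threading

# Server side: read until the peer closes, treating the empty read
# as a clean end-of-stream instead of an error.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

received = []

def serve():
    conn, addr = srv.accept()
    chunks = []
    while True:
        data = conn.recv(1024)
        if not data:         # empty bytes == peer called close(): normal EOF
            break
        chunks.append(data)
    received.append((addr[0], b"".join(chunks).decode()))
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client side: connect, send one line, close -- the same thing the app does.
cli = socket.socket()
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello from client\n")
cli.close()

t.join()
srv.close()
print(received[0][0] + ": " + received[0][1].strip())  # 127.0.0.1: hello from client
```

Processing's library surfaces the EOF as a SocketException instead, which is why the red message appears even though nothing is actually wrong.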
Any help would be appreciated. Many thanks in advance.
Fab
Answers
Well, it looks OK to get this exception when the socket is closed. Just catch this exception and all will be fine.
Hi PhiLho, thanks for your answer. I've added some try/catch to my Processing code, and I get the NullPointerException right where I try to store the received message into the incomingMsg string, even though there are two if (...) checks verifying that the containers are NOT null! Here is what the draw() function now looks like:
I cannot catch the exception you are saying because the SocketException is not among the valid exceptions that Processing can handle (I've already tried). I think it is somewhere deep in the java code exploited by Processing. Moreover if I catch that SocketException the program always catches exceptions and never does what I want it to do.
Sorry, I accidentally clicked on "accept answer"... the problem is still alive :) Thank you.
Processing auto-imports some of the Java API's libraries.
If we've got the need for more, we gotta import them ourselves!
Class SocketException belongs to "java.net" package:
We either use:
import java.net.SocketException;
or type it fully when mentioning the class:
catch (java.net.SocketException cause) {}
Never catch NPE. Indeed, checking if a variable is null before using it is the way to go. println() your variables, to see which one is null. You don't tell us on which line it happens.
And, indeed, Processing tends to catch exceptions itself and prevent them from bubbling up to the sketch, just printing them out, so you are out of luck here.
Why do you want to catch it? Is there a malfunction, or is it just to avoid this error to be displayed?
@GoToLoop: I didn't know that Processing could also import more Java libs! So may I write a Processing sketch just like in Android, with pure Java? Of course, when I started programming I found Processing more interesting for its simplicity...
@PhiLho: there was a malfunction; Java crashed every time I sent a string from the client, but the cause was hidden because it wasn't stored in any variable. The problem is the method to get the IP address:
print(thisClient.ip() + ", " + port + ": " + incomingMsg);
But I'm not able to explain why it works if I do not close the socket, yet returns a NullPointer if I close it. It seems like it has no time to get the IP before its socket gets closed. Any idea?
For now, thank you both. I simply neglect to print out the client IP, I don't need it at the moment.
PS: the SocketException: Client got end-of-stream still appears in red in the Processing console when I send a string (i.e., when I close a socket), but it doesn't bother me much...
Client's ip() method:
Field socket is a Socket:
I guess before using any Client method such as that ip(), we should call active() 1st:
https://forum.processing.org/two/discussion/7106/always-get-client-got-end-of-stream-error
Introduction: Snail Mail Alert
I have wanted to build a device for my Snail Mail USPS Box to alert me when my box was opened. Good for security, but mostly because my box is 150 ft. from the door and I want to avoid unnecessary trips or stops at the box.
I tried to use a hacked Amazon Dash Button as described in several places on the web, but never could get consistent results, probably because of limitations of my wifi router.
Recently I found the Instructables on line Internet of Things Class. On taking the class I realized I could use what I learned to make my Snail Mail Alert device.
It worked! Opening the box caused the device to send a text message to my smartphone. But there was a serious problem: battery life was unacceptable. So first thing, I changed the code to read an analog input from a voltage divider and send that as the data in the text message so I could track it.
I went ahead and bought a solar panel and charge controller from Adafruit to complement the Feather Huzzah ESP8266, but I still did not feel that the 3.7v LIPO battery would last long enough through many days of cloudy weather. I included the temp sensor option on the charge controller since it would be in a black mailbox.
Searching the net, I found the idea to use a low power scheme based on the ESP.deepSleep() function. It seemed that the Huzzah used very little current in deep sleep which was fine as I only needed it to run when the box door was opened.
Instead of the normal setup() / loop() structure of the usual Arduino program, all the work is done in setup(), then deepSleep() is called. The loop() is never started and the program sleeps till a LOW on the RST pin triggers a restart that executes setup() again. And repeat every time the RST is pulled LOW by a magnetic reed switch in the mail box. The setup() code connects to wifi, reads the battery voltage and sends the message to the Adafruit IO feed, then sleeps till the next reset.
Good luck if you try to make this, I've no doubt left out some details and assume some level of experience. I'll try to field any questions, comments or suggestions.
Step 1: Materials and Prototyping
- Adafruit Feather Huzzah ESP8266 board in the configuration of your choice:
assembled with regular headers
USB power supply (optional)
3.7v Lipoly battery
USB / DC / Solar Lithium Ion/Polymer charger - v2
Medium 6V 2W Solar panel - 2.0 Watt
3.5 / 1.3mm or 3.8 / 1.1mm to 5.5 / 2.1mm DC Jack Adapter Cable
47K and 10K Resistor (for voltage divider)
1 x 1MΩ resistor (optional)
1 x 1µF capacitor (optional)
- Magnetic Reed Switch
- Plastic project box
I used a push button to simulate the door switch while prototyping.
Step 2: Installation in the Mailbox
Put it all in the box and done.
Step 3: And It Works!
Code: take the class if you need help with what this is.
Config.h
// visit io.adafruit.com if you need to create an account,
// or if you need your Adafruit IO key.
/************************ Adafruit IO Configuration ***********************/
#define IO_USERNAME "you2adafruit"
#define IO_KEY "alongstringofcharactersthatadafruitwillgiveyou"
/****************** WIFI Configuration***********************/
#define WIFI_SSID "yourroutername"
#define WIFI_PASS "yourpassword"
#include "AdafruitIO_WiFi.h"
AdafruitIO_WiFi io(IO_USERNAME, IO_KEY, WIFI_SSID, WIFI_PASS);
SnailMailLowPower.ino
// Instructables Internet of Things Class sample code
// Circuit Triggers Internet Action
// A reset is triggered and battery voltage stored in a feed
// An LED is used as confirmation feedback
//
// Modified by Becky Stern 2017
// based on the Adafruit IO Digital Input Example
// Tutorial Link:...
//
// Adafruit invests time and resources providing this open source code.
// Please support Adafruit and open source hardware by purchasing
// products from Adafruit!
//
// Written by Todd Treece for Adafruit Industries
// Licensed under the MIT license.
//
// All text above must be included in any redistribution.
//
// Adapted by Tom Phillips 2018
// Changed to execute code formerly in loop()
// Sends Battery voltage to Adafruit IO to indicate
// execution then go to deep Sleep.
// On LOW to RST pin, program restarts to send voltage message again.
#include "config.h"
/************************ Main Program Starts Here ************************/
#include "ESP8266WiFi.h"
#include "AdafruitIO.h"
#include "Adafruit_MQTT.h"
#include "ArduinoHttpClient.h"
#define LED_PIN 13
// battery voltage
float battVolts = 0.0;
void setup() {
//set up the 'command' feed
AdafruitIO_Feed *command = io.feed("command");
// start the serial connection
Serial.begin(115200);
while(!Serial);
// flash the LED
pinMode(LED_PIN, OUTPUT);
digitalWrite(LED_PIN, HIGH);
delay(500);
digitalWrite(LED_PIN, LOW);
// connect to io.adafruit.com
Serial.print("Connecting to Adafruit IO ");
io.connect();
// wait for a connection
while(io.status() < AIO_CONNECTED) {
Serial.print(".");
delay(500);
}
// we are connected
Serial.println();
Serial.println(io.statusText());
// io.run(); is required for all sketches.
// it should always be present at the top of your loop
// or in this case, in setup()
// function. it keeps the client connected to
// io.adafruit.com, and processes any incoming data.
io.run();
// borrowed some of the math from:
//...
// get that voltage from the voltage divider
int rawLevel = analogRead(A0);
// convert battery level to percent
int level = map(rawLevel, 500, 609, 0, 100);
// used my ohmmeter to set these values. 9820 for the 10K and 49800 for the parallel 100Ks I used
battVolts = (float)rawLevel / 1000 / (9820. / (49800 + 9820));
// round to 2 decimal places
battVolts += 0.05;
battVolts = float(int(battVolts * 100)) / 100;
// save the battVolts value to the 'command' feed on adafruit io
Serial.print("sending battery volts -> ");
Serial.println(battVolts,2);
command->save(battVolts);
//Go to sleep and stay there till RST goes LOW
ESP.deepSleep(0);
}
void loop(){} // never get here.
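As a sanity check on the divider math in setup(), the same conversion can be reproduced outside the sketch. Here is a sketch in Python using the measured resistor values from the code comment (9820 Ω to ground, 49800 Ω on top) and the Huzzah ADC's roughly 1 mV-per-count scale, both taken from the code above:

```python
R_TOP, R_BOTTOM = 49800.0, 9820.0   # measured values from the sketch's comment

def batt_volts(raw_adc: int) -> float:
    """Battery voltage from a raw A0 reading (~1 mV per ADC count)."""
    divider_ratio = R_BOTTOM / (R_TOP + R_BOTTOM)
    return raw_adc / 1000.0 / divider_ratio

# A reading of 609 corresponds to a nearly full 3.7 V LiPo:
print(round(batt_volts(609), 2))  # 3.7
```

This also explains the map(rawLevel, 500, 609, 0, 100) call: raw counts of roughly 500 to 609 span the usable range of the LiPo (about 3.0 V to 3.7 V).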
2 Comments
Very practical and nice idea!
Excellent idea, and nicely done!
http://www.instructables.com/id/Snail-Mail-Alert/
February 2007
We don't have the means to download all the information we need directly into our heads. Besides, what fun would that be - half the enjoyment is in the learning. Remember how you felt when you got your first program running? Well, there's a lot more out there waiting to be conquered.
This book has not tried to teach you everything about C# programming. Rather, it has tried to give you a grounding in the basic concepts and enough exposure to real examples for you to take yourself forward.
The examples we've used in this book have always used a single file to write the code in. Visual C# Express sometimes splits the code across a few files. We should point out particularly what it does when you start a new "Windows Application" project, so that you know where to start writing your form code.
The screen grab below illustrates what happens in that case – we’ve arranged the windows a bit differently from the default in order to show you clearly what is created for you.
Let’s discuss the four windows above:
Code view of Form1.cs – this holds the file where you will add code that does things on the form. You will, for example, add an event handler for a button here.
Designer view of Form1.Designer.cs – this is where you can drag-and-drop controls such as buttons from the toolbox. That saves you having to hand-code everything.
Code view of the Form1.Designer.cs – when a button, for example, is dropped on the designer surface, Visual C# Express writes some code for you to declare a button instance. That code gets put in this file. So this file is really for the system’s own use – to represent in code the things that you do on the design surface. You will not normally change or add code in this file. In fact, the system draws the designer view out of the information it finds in this file. (The designer view doesn’t have another file – this is its file).
Code view of Program.cs. This file contains the familiar Main() method and some code to automatically create an instance of your Form class. So it’s the file responsible for starting the whole program. In the case of a Windows Application you will not normally change this file.
This splitting of code across files is made possible by something called partial classes – you can have bits of your class in different physical files. Why on earth does the class above have to be split across files? It doesn’t, but there’s a good reason for doing it – to keep code apart that has different basic purposes. Although it may seem complex to have three files, what it’s achieved is giving you a single, clean file, Form1.cs, where you can write all the code that you’re interested in. The system often creates code for you and you’d get quite offended if it dumped that code in the file you’re trying to write in. Similarly, Visual C# Express could get horribly confused if you started messing with the files that it thinks it owns. The “partial” scheme allows everyone to have their own space to work in. Your territory is the top two windows in the view above – the designer view and the code view of Forms1.cs.
Of course, you can always choose to delete Program.cs and Form1.cs and start with your own files instead.
As with any new field that you learn, many questions will arise as you continue. These will often fall within the following areas:
What do I do when I get an error message in Visual C# express?
Calm down and listen to what the computer is trying to tell you. Admittedly, it's sometimes not clear enough, but trying to put yourself in its shoes helps.
Read more about "debugging" in Visual C# Express - it allows you to do a slow-motion step through a program one line at a time, even inspecting the values of your variables, behind the scenes, along the way. This doesn't do the fixing for you, but helps your own brain by exposing more information. The real detective work is up to you.
If it's not making sense, try doing a search for that error message in the help, or on your favorite Internet search engine. Often you will get clues which make you think differently and so grasp the nature of the problem.
In the .NET Framework Class Library ...
How do I know what classes I can find?
How do I know what methods those classes have?
How do I know what parameters to pass to those methods?
Visual C# Express includes a thorough reference to all the classes in the .NET Framework Class Library. Select Help -> Contents to call up the help and then locate the topic ".NET Framework SDK". In the example below, suppose we were looking for information on what classes are available in the System.Windows.Forms namespace. We'd select "Class Library" and then scroll down the right-hand page to "System.Windows.Forms".
When you click the System.Windows.Forms hyperlink, a list of classes (in this case things like Button, Label, ComboBox, etc.) will be displayed. Once you select a specific class (like ComboBox) you'll be shown something similar to the picture below. Then clicking the "Members" hyperlink will take you to a page which lists all the methods, properties and events available in that class and specifies their details.
What's really great is the huge number of examples that are included in the reference. The cold reference can sometimes seem a bit meaningless, but click "Example" and suddenly things start to fall into place. If you don't find a suitable example by navigating the reference, do a search in the general help and you will often turn up useful examples that are stored in other places.
Where can I find out more?
You'll get to a point where you can't figure out more on your own. Then it's time to hit the community out there, see what other people are up to with C# and look at how they solve various problems with C#.
You would do well to spend some time on community websites devoted to C#.
Don't be afraid to ask some "newby" questions at the discussion forums but make sure you browse through first to see whether someone else has already asked such a question.
Enjoy your learning journey!
Microsoft’s newest programming language, C#, (pronounced “c-sharp”) is both powerful and easy to use. It presents a great opportunity for the new generation of developers to start out with a language that is highly respected in the modern workplace.
This text introduces object-oriented programming to the young developer (core target age is 12-16) in a lightweight fashion, allowing them to get started with real programs in a Windows environment.
Martin Dreyer is an ex-high school teacher who now heads a team of software developers in South Africa.
His formal qualifications are a Higher Diploma in Education : Physical Science and a Bachelor of Science Degree : Computer Science and Information Systems.
http://msdn.microsoft.com/en-us/library/bb297399(VS.80).aspx
|
Question:
I have a Lotus Notes application which actually consists of a template with all the required forms, views and agents needed. It also requires some design elements (a custom form and a view for our own type of documents) from this template to be copied over to the mail template, so after the regular refresh all users have it.
The application works like this: the application database (derived from the template I provide) is created on the Domino server. An agent, running in this database, upon http request, creates a "custom" document in user's mail database.
Then, on the client side, the user can use our view to display this document.
Currently, the deployment procedure goes like this:
- Create a "master" application database out of our template.
- Fill some data, using the forms and views in that database (to configure how the application works)
- Copy the custom form and view to the mail template.
- Create our button (to launch our view and/or form) on the mail template.
- After the nightly database refresh, all users receive the custom form and the view in their mail database, and they can use the button to view our documents (if any).
Now, I want to ease the admin's work and automate copying the custom form and view, and also the creation of the button, to the mail template.
Any idea how I can do this from a NotesScript, JavaScript, Java?
Solution:1
That sounds doable with DXL, and I think you can use both LotusScript and Java to accomplish it.
Something along the lines of this should do it in Java:
public class RenderDesign extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            DxlImporter myimporter = session.createDxlImporter();
            try {
                myimporter.setDesignImportOption(myimporter.DXLIMPORTOPTION_REPLACE_ELSE_CREATE);
                myimporter.importDxl(this.getDxl(), agentContext.getCurrentDatabase());
            } catch (Exception e) {
                System.out.println(this.getDxl());
                System.out.println(myimporter.getLog());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Then just construct a string with the DXL. Use Tools -> DXL Utilities -> Exporter (or Viewer) to inspect the design element you want to add or edit:
public String getDxl(String agentname, String replicaid) {
    return "<?xml version='1.0' encoding='utf-8'?>" +
           "<view name='(auto-view)'> " + /* ... */ "</view>";
}
Note that the DXL importer is anything but robust and error-tolerant: You can make the Developer client crash on input that is valid XML and conformant with the DTD. For example, trying to set
fieldhint="" on a field. Keep this in mind while developing.
Solution:2
Try looking at these for ideas --->
To avoid some of the DXL known issues you can try to export & import in encoded binary format.
**Update
After looking at your situation a bit more closely, I think the easiest route would be to use template inheritance. So you would copy the elements from your custom template into the Mail template and make sure the elements are setup to inherit from your custom template.
http://www.toontricks.com/2018/06/tutorial-lotus-notes-scripting-creation.html
doctest
Test interactive Haskell examples
Doctest: Test interactive Haskell examples
doctest is a small program, that checks examples in Haddock comments. It is similar
to the popular Python module with the same name.
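For readers who know the Python module it is named after, the workflow is directly comparable: examples live in docstrings and are checked against the interpreter's output. A minimal Python sketch (standard library only; the fib function here is just an illustrative stand-in):

```python
import doctest

def fib(n):
    """Compute Fibonacci numbers.

    >>> [fib(i) for i in range(8)]
    [0, 1, 1, 2, 3, 5, 8, 13]
    """
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Zero failures means every example's printed result matched.
print(doctest.testmod().failed)  # 0
```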
Installation
doctest is available from Hackage. Install it by typing:

cabal install doctest

Make sure that Cabal's bindir is on your PATH.
On Linux:
export PATH="$HOME/.cabal/bin:$PATH"
On Mac OS X:
export PATH="$HOME/Library/Haskell/bin:$PATH"
On Windows:
set PATH="%AppData%\cabal\bin\;%PATH%"
For more information, see the section on paths in the Cabal User Guide.
(A comment line starting with >>> denotes an expression. All comment lines following an expression denote the result of that expression. The result is defined by what a REPL (e.g. ghci) prints to stdout and stderr when evaluating that expression.)
With doctest you may check whether the implementation satisfies the given examples by typing:
doctest Fib.hs
You may produce Haddock documentation for that module with:
haddock -h Fib.hs -o doc/
Example groups
Examples from a single Haddock comment are grouped together and share the same scope. E.g. the following works:
-- | -- >>> let x = 23 -- >>> x + 42 -- 65
If an example fails, subsequent examples from the same group are skipped. E.g. for
-- | -- >>> let x = 23 -- >>> let n = x + y -- >>> print n
print n is not tried, because let n = x + y fails (y is not in scope!).
A note on performance
By default, doctest calls :reload between each group to clear GHCi's scope of any local definitions. This ensures that previous examples cannot influence later ones. However, it can lead to performance penalties if you are using doctest in a project with many modules. One possible remedy is to pass the --fast flag to doctest, which disables calling :reload between groups.
If doctests are running too slowly, you might consider using --fast (with the caveat that the order in which groups appear now matters!).

However, note that due to a bug on GHC 8.2.1 or later, the performance of --fast suffers significantly when combined with the --preserve-it flag (which keeps GHCi's it value between examples).
Setup code
You can put setup code in a named chunk with the name $setup. The setup code is run before each example group. If the setup code produces any errors/failures, all tests from that module are skipped.
Here is an example:
module Foo where import Bar.Baz -- $setup -- >>> let x = 23 :: Int -- | -- >>> foo + x -- 65 foo :: Int foo = 42
Note that you should not place setup code between the module header (module ... where) and import declarations. GHC will not be able to parse it (issue #167). It is best to place setup code right after import declarations, but due to its declarative nature you can place it anywhere between top-level declarations as well.
Multi-line input
GHCi supports commands which span multiple lines, and the same syntax works for doctest:
-- | -- >>> :{ -- let -- x = 1 -- y = 2 -- in x + y + multiline -- :} -- 6 multiline = 3
Note that >>> can be left off for the lines following the first: this is so that haddock does not strip leading whitespace. The expected output has whitespace stripped relative to the :}.
Some peculiarities on the ghci side mean that whitespace at the very start is lost. This breaks the example broken, since the x and y are aligned from ghci's perspective. A workaround is to avoid leading space, or to add a newline such that the indentation does not matter:
{- | >>> :{ let x = 1 y = 2 in x + y + works :} 6 -} works = 3 {- | >>> :{ let x = 1 y = 2 in x + y + broken :} 3 -} broken = 3
Multi-line output
If there are no blank lines in the output, multiple lines are handled automatically.
-- | >>> putStr "Hello\nWorld!" -- Hello -- World!
If, however, the output contains blank lines, they must be noted explicitly with <BLANKLINE>. For example,
import Data.List ( intercalate ) -- | Double-space a paragraph. -- -- Examples: -- -- >>> let s1 = "\"Every one of whom?\"" -- >>> let s2 = "\"Every one of whom do you think?\"" -- >>> let s3 = "\"I haven't any idea.\"" -- >>> let paragraph = unlines [s1,s2,s3] -- >>> putStrLn $ doubleSpace paragraph -- "Every one of whom?" -- <BLANKLINE> -- "Every one of whom do you think?" -- <BLANKLINE> -- "I haven't any idea." -- doubleSpace :: String -> String doubleSpace = (intercalate "\n\n") . lines
Matching arbitrary output
Any lines containing only three dots (...) will match one or more lines with arbitrary content. For instance,
-- | -- >>> putStrLn "foo\nbar\nbaz" -- foo -- ... -- baz
If a line contains three dots and additional content, the three dots will match anything within that line:
-- | -- >>> putStrLn "foo bar baz" -- foo ... baz
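Both output conventions have direct counterparts in Python's doctest module: <BLANKLINE> works identically, and the wildcard is enabled per-example with the ELLIPSIS option flag. A sketch:

```python
import doctest

def demo():
    r"""
    >>> print("para one\n\npara two")
    para one
    <BLANKLINE>
    para two
    >>> print("foo bar baz")  # doctest: +ELLIPSIS
    foo ... baz
    """

# Both examples in the docstring above pass.
print(doctest.testmod().failed)  # 0
```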
QuickCheck properties
Haddock (since version 2.13.0) has markup support for properties. Doctest can verify properties with QuickCheck. A simple property looks like this:
-- | -- prop> \xs -> sort xs == (sort . sort) (xs :: [Int])
The lambda abstraction is optional and can be omitted:
-- | -- prop> sort xs == (sort . sort) (xs :: [Int])
A complete example that uses setup code is below:
module Fib where -- $setup -- >>> import Control.Applicative -- >>> import Test.QuickCheck -- >>> newtype Small = Small Int deriving Show -- >>> instance Arbitrary Small where arbitrary = Small . (`mod` 10) <$> arbitrary -- | Compute Fibonacci numbers -- -- The following property holds: -- -- prop> \(Small n) -> fib n == fib (n + 2) - fib (n + 1) fib :: Int -> Int fib 0 = 0 fib 1 = 1 fib n = fib (n - 1) + fib (n - 2)
If you see an error like the following, ensure that QuickCheck is a dependency of the test-suite or executable running doctest.
<interactive>:39:3: Not in scope: ‘polyQuickCheck’ In the splice: $(polyQuickCheck (mkName "doctest_prop")) <interactive>:39:3: GHC stage restriction: ‘polyQuickCheck’ is used in a top-level splice or annotation, and must be imported, not defined locally In the expression: polyQuickCheck (mkName "doctest_prop") In the splice: $(polyQuickCheck (mkName "doctest_prop"))
Hiding examples from Haddock
You can put examples into named chunks and not refer to them in the export list. That way they will not be part of the generated Haddock documentation, but Doctest will still find them.
-- $ -- >>> 1 + 1 -- 2
Using GHC extensions
There’s two sets of GHC extensions involved when running Doctest:
- The set of GHC extensions that are active when compiling the module code (excluding the doctest examples). The easiest way to specify these extensions is through LANGUAGE pragmas in your source files. (Doctest will not look at your cabal file.)
- The set of GHC extensions that are active when executing the Doctest examples. (These are not influenced by the LANGUAGE pragmas in the file.) The recommended way to enable extensions for Doctest examples is to switch them on like this:
-- | -- >>> :set -XTupleSections -- >>> fst' $ (1,) 2 -- 1 fst' :: (a, b) -> a fst' = fst
Alternatively you can pass any GHC options to Doctest, e.g.:
doctest -XCPP Foo.hs
These options will affect both the loading of the module and the execution of the Doctest examples.
If you want to omit the information about which language extensions are enabled from the Doctest examples, you can use the method described in [Hiding examples from Haddock](#hiding-examples-from-haddock), e.g.:
-- $ -- >>> :set -XTupleSections
Cabal integration
Doctest provides both an executable and a library. The library exposes a function doctest of type:
doctest :: [String] -> IO ()
Doctest's own main is simply:
main = getArgs >>= doctest
Consequently, it is possible to create a custom executable for a project by passing all command-line arguments that are required for that project to doctest. A simple example looks like this:
-- file doctests.hs import Test.DocTest main = doctest ["-isrc", "src/Main.hs"]
And a corresponding Cabal test suite section like this:
test-suite doctests type: exitcode-stdio-1.0 ghc-options: -threaded main-is: doctests.hs build-depends: base, doctest >= 0.8
Doctest in the wild
You can find real-world examples of Doctest being used below:
Doctest extensions
Development
Join in at #hspec on freenode.
Discuss your ideas first, ideally by opening an issue on GitHub.
Add tests for new features, and make sure that the test suite passes with your changes.
cabal configure --enable-tests && cabal build && cabal exec cabal test
Contributors
- Adam Vogt
- Anders Persson
- Ankit Ahuja
- Edward Kmett
- Hiroki Hattori
- Joachim Breitner
- João Cristóvão
- Julian Arni
- Kazu Yamamoto
- Levent Erkok
- Luke Murphy
- Matvey Aksenov
- Michael Orlitzky
- Michael Snoyman
- Nick Smallbone
- Sakari Jokinen
- Simon Hengel
- Sönke Hahn
Changes
- Add `--verbose` for printing each test as it is run
Changes in 0.14.1
- Add test assets to source tarball (see #189)
Changes in 0.14.0
- GHC 8.4 compatibility.
Changes in 0.13.0
- Add `--preserve-it` for allowing the `it` variable to be preserved between examples
Changes in 0.12.0
- Preserve the 'it' variable between examples
Changes in 0.11.4
- Add `--fast`, which disables running `:reload` between example groups
Changes in 0.11.3
- Add `--info`
- Add `--no-magic`
Changes in 0.11.2
- Make `...` match zero lines
Changes in 0.11.1
- Fix an issue with Unicode output on Windows (see #149)
Changes in 0.11.0
- Support for GHC 8.0.1-rc2
Changes in 0.10.1
- Automatically expand directories into contained Haskell source files (thanks @snoyberg)
- Add cabal_macros.h and autogen dir by default (thanks @snoyberg)
Changes in 0.10.0
- Support HASKELL_PACKAGE_SANDBOXES (thanks @snoyberg)
Changes in 0.9.13
- Add ellipsis as wildcard
Changes in 0.9.12
- Add support for GHC 7.10
Changes in 0.9.11
- Defaults ambiguous type variables to Integer (#74)
Changes in 0.9.10
- Add support for the upcoming GHC 7.8 release
Changes in 0.9.9
- Add support for multi-line statements
Changes in 0.9.8
- Support for GHC HEAD (7.7)
Changes in 0.9.7
- Ignore trailing whitespace when matching example output
Changes in 0.9.6
- Fail gracefully if GHCi is not supported (#46)
Changes in 0.9.5
- Fix a GHC panic with GHC 7.6.1 (#41)
Changes in 0.9.4
- Respect HASKELL_PACKAGE_SANDBOX (#39)
- Print path to ghc on --version
Changes in 0.9.3
- Properly handle additional object files (#38)
Changes in 0.9.2
- Add support for QuickCheck properties
Changes in 0.9.1
- Fix an issue with GHC 7.6.1 and type families
Changes in 0.9.0
- Add support for setup code (see README).
- There is no distinction between example/interaction anymore. Each
expression is counted as an example in the summary.
Changes in 0.8.0
- Doctest now directly accepts arbitrary GHC options, prefixing GHC options
with --optghc is no longer necessary
Changes in 0.7.0
- Print source location for failing tests
- Output less clutter on failing examples
- Expose Doctest's functionality through a very simplistic API, which can be
used for cabal integration
Changes in 0.6.1
- Fix a parser bug with CR+LF line endings
Changes in 0.6.0
- Support for ghc-7.4
- Doctest now comes with its own parser and does not depend on Haddock anymore
Changes in 0.5.2
- Proper handling of singular/plural when printing stats
- Improve handling of invalid command line options
Changes in 0.5.1
- Adapted for ghc-7.2
Changes in 0.5.0
- Print number of interactions to stderr before running tests
- Exit with exitFailure on failed tests
- Improve documentation
- Give a useful error message if ghc is not executable
Opened 10 months ago
Closed 10 months ago
#21067 closed Cleanup/optimization (invalid)
is_staff shouldn't be checked in admin templates
Description
In all the templates under django.contrib.admin, the only place where user.is_staff is used is base.html
/django/django/contrib/admin/templates$ grep -r is_staff
admin/base.html:    {% if user.is_active and user.is_staff %}
This block wraps
<div id="user-tools">
    {% trans 'Welcome,' %}
    <strong>{% firstof user.get_short_name user.get_username %}</strong>.
    {% block userlinks %}
        {% url 'django-admindocs-docroot' as docsroot %}
        {% if docsroot %}
            <a href="{{ docsroot }}">{% trans 'Documentation' %}</a> /
        {% endif %}
        {% if user.has_usable_password %}
            <a href="{% url 'admin:password_change' %}">{% trans 'Change password' %}</a> /
        {% endif %}
        <a href="{% url 'admin:logout' %}">{% trans 'Log out' %}</a>
    {% endblock %}
</div>
It's my impression that the condition user.is_staff should be removed from the "if" clause.
There is already Python code in charge of checking that the user has the proper permissions, such as admin.views.decorators.staff_member_required and admin.sites.AdminSite.has_permission, so the mentioned condition is redundant.
I happened to discover this when building a custom admin site that doesn't require the user to be staff. The current admin implementation prevents an "elegant" customization/extension and forces the developer to replace the template entirely. If not, the user could still log in and use the admin site but wouldn't see the links to change password and log out, because they are wrapped by the aforementioned "if" clause.
PS: have mercy on me, it's my first patch and "real" bug report :)
Attachments (0)
Change History (5)
comment:1 Changed 10 months ago by glarrain
- Has patch set
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 10 months ago by mjtamlyn
- Triage Stage changed from Unreviewed to Ready for checkin
- Type changed from Bug to Cleanup/optimization
Change looks reasonable, I think that test is redundant and it is unhelpful in the context of custom user models, or in general!
I'm not sure that we will need a note in the release notes, I don't think anyone should be relying on this.
comment:3 Changed 10 months ago by timo
I don't see a use for the is_active check either, as inactive users can't log in. Thoughts?
comment:4 Changed 10 months ago by Tim Graham <timograham@…>
comment:5 Changed 10 months ago by timo
- Resolution set to invalid
- Status changed from new to closed
Actually, it appears that the way the templates are currently structured both checks are required. The checks prevent that block from appearing on the login form if a user has either flag removed while he's already logged in. I've added a test which would fail if is_staff is removed. There's already a test for is_active. There may be a better way to structure the templates so we can handle the situation you have in mind. If you are interested in pursuing that, please open a separate ticket, thanks!
Created pull request
Test suite results haven't changed
JSF 2.0 and rendered attribute improvements - Arbi Sookazian, Sep 11, 2009 7:18 PM
This is a long and interesting wiki written up by DAllen that I just stumbled upon (how come you guys write these helpful articles and never seem to announce them in the forum - I found this months later????). It's related to Seam because if you're using JSF 1.x with Seam, you may be experiencing performance degradation as a direct result of the getter methods (associated with the rendered attribute) in your backing beans being executed too many times during the JSF life cycle.
Here is the enhancement in JSF issues list: due to be fixed in JSF 2.0.
1. Re: JSF 2.0 and rendered attribute improvements - Jonathan Fields, Apr 9, 2010 10:23 PM (in response to Arbi Sookazian)
I definitely agree that this needs to be fixed. I just spent about half a day trying to understand why a DB query was being executed TWICE. Using the rendered attribute was the culprit. I have a very simple CRUD search page using Seam EntityQuery. The rendered expressions which call EntityQuery.getResultList() (to see if it is empty, or not) are being evaluated in Apply Request Values, which fires the DB query. Then, during Invoke Application, EntityQuery.first() is being called, as a result of a button click, which refreshes the query. Finally in Render Response, EntityQuery.getResultList() is called again, and since first() was just called, fires the DB query a second time.
One workaround I can see is using c:if instead of rendered. I don't like that, and am not even sure if it's completely safe to do.... The other possibility is to eliminate the use of dataTable, since one of the rendered attributes is determining whether there are any rows to display.
I've read some fairly arrogant posts on other forums about how there's no problem with rendered being evaluated so much. They say "bad application design" and all that.
To me, it's not so much the frequency, but when it's called. Being called before Invoke Application is the issue to me, and doesn't make sense, since what's truly going to be rendered will likely depend upon the outcome of what's done in Invoke Application. (For example, an action in Invoke Application might update the DB, which changes the values of the rendered expressions between Apply Request Values and Render Response.)
2. Re: JSF 2.0 and rendered attribute improvements - Hui Onn Tan, Oct 7, 2010 11:39 AM (in response to Arbi Sookazian)
I encountered exactly the same problem today. To prevent calling EntityQuery.getResultList() before the INVOKE APPLICATION phase, the workaround I use is to exclude the datatable from server-side processing with <a4j:region>.
However, this is not a good general solution. I wonder whether there is a recommended way of providing a consistent rendered expression in JSF 1.2. The expression should be re-evaluated during the INVOKE APPLICATION phase.
3. Re: JSF 2.0 and rendered attribute improvements - Tim Evers, Oct 8, 2010 1:38 AM (in response to Arbi Sookazian)
Some of the reasons why it is evaluated multiple times were fairly valid reasons. Not necessarily good design but they at least had reasons.
I was going to write a bit more, but I'm really nowhere near as experienced as a lot of people who have already written fairly good explanations out there on the web.
JSF 2.0 will make a difference, yes, but it's not going to be a silver bullet. If you are writing things where a getter goes back to the database after the original list call has been made, or anything like that, then your code is wrong. It can get out of sync, and ultimately we should be writing code that goes to the database as little as possible. Many small transactions will take a lot longer than grabbing your data up front.
I say all this but I know there are special circumstances where this may not be applicable.
Tan, your solution is much more complicated than it needs to be.
Just don't put an EL expression in your JSF that causes that method to be called. A getter should not be the method to do the query. Your buttons (or events if you are using fancy ajaxy stuff :P) should fire off methods that do the query, or your initialisation into the page should do the query, not a getter. Keep your getters simple, it will be worth it later.
4. Re: JSF 2.0 and rendered attribute improvements - Hui Onn Tan, Oct 20, 2010 12:06 PM (in response to Arbi Sookazian)
Thanks for your advice. I fully agree with your opinion.
However that is seam-gen design. I just customize my application from seam-gen template.
I thought it would be troublesome to change all the ajax calls (sort, paging, search) to invoke the method that does the database query, but it turns out to be easier than I expected.
it will be worth it later
So true... Now, I can pre-process and post-process the result list easily (for multi-page selection).
5. Re: JSF 2.0 and rendered attribute improvements - Roger Mori, Nov 5, 2010 1:20 PM (in response to Arbi Sookazian)
Following this article's advice, I was trying to count how many times the DB is effectively being hit by a query.
Based on a view (list) generated by seam-gen, I found that the getResultList method is being called several times, but the query itself is performed only once per JSF request.
Following the EntityQuery source code, I placed a counter in an overridden version of the createQuery method, which is the only stop in the chain just before the query is performed.
How can I replicate the above behavior?
Please advise
6. Re: JSF 2.0 and rendered attribute improvements - Tim Evers, Nov 7, 2010 6:14 PM (in response to Arbi Sookazian)
Well, just look at where the createQuery method gets called from and you'll see how they do this.
The way this is done is to check whether the resultList or singleResult variables are null. If they are null, the query is created and executed. If they are not null, the cached value is used.
So essentially, you can write all your getters to only query the DB if the result var is null. This is an OK solution but still not the best.
First: you'll still execute the query many times if you are doing a singleResult query that returns no value, because the result will always be null. Thus the query will execute many times.
Second: you have to make sure you null out these variables when you want the query to re-execute. This doesn't sound bad, but it makes your code hard to read and follow, because instead of calling a method like refreshResultsFromDB() you'll just write results = null;. I mean, you could put the results = null; line in a nicely named method, but then you are still left with the problem of when exactly the query will be executed. What phase? Do I need the results before that? etc.
Following on from my second point, I just want to say the reason why I like to keep my getters and setters simple. When writing code try to remember that you may not always be the one maintaining it. Ideally when someone else comes along and takes your place as the maintenance programmer they should be able to follow your code logically and work out how things happen and when. If getters contain logic then there is no way to follow the execution path of your app as you can never really be sure (without running the app) exactly what order things are happening. If you follow a nice pattern of an init() method when navigating to a page. a reset() method (if applicable) for a reset button. saveButtonClicked() method or appropriate action methods for other command buttons. And ensure that all db queries and logic are in these action or actionListener methods. This will make other developers love you when they have to come in and make changes :)
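The cache-and-refresh getter described above can be sketched in a few lines. This is a hypothetical illustration (the class and method names are made up, and it stands in for Seam's EntityQuery rather than reproducing it): the getter runs the query only when the cached result is null, and a well-named refresh() method clears the cache instead of a bare results = null; scattered through the code.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the lazy-getter pattern discussed above;
// not the actual Seam EntityQuery implementation.
public class LazyQuery {
    private List<String> resultList;
    private int queryCount = 0; // counts simulated database hits

    // Stand-in for the expensive database query.
    private List<String> runQuery() {
        queryCount++;
        return Arrays.asList("row1", "row2");
    }

    // Query only when no cached result exists.
    public List<String> getResultList() {
        if (resultList == null) {
            resultList = runQuery();
        }
        return resultList;
    }

    // Named refresh method instead of a bare "resultList = null;".
    public void refresh() {
        resultList = null;
    }

    public int getQueryCount() {
        return queryCount;
    }
}
```

Note the caveat from the first point above: a query that legitimately returns null defeats the null-as-cache-marker and re-executes every time.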
7. Re: JSF 2.0 and rendered attribute improvements - Roger Mori, Nov 9, 2010 6:25 PM (in response to Arbi Sookazian)
Tim:
Thank you for your feedback.
I have finished extending EntityQuery to trigger the query only if a parameter has been set to true. As a result, the query hits the database only once per JSF request.
package com.dpn.action.sf;

import java.util.List;

import org.jboss.seam.framework.EntityQuery;

public class HerlindaQuery<E> extends EntityQuery<E> {

    private Boolean openFire;

    @Override
    public List<E> getResultList() {
        if (getOpenFire()) {
            return super.getResultList();
        } else {
            return null;
        }
    }

    @Override
    public E getSingleResult() {
        if (getOpenFire()) {
            return super.getSingleResult();
        } else {
            return null;
        }
    }

    @Override
    public Long getResultCount() {
        if (getOpenFire()) {
            return super.getResultCount();
        } else {
            return null;
        }
    }

    public Boolean getOpenFire() {
        return openFire == null ? false : openFire;
    }

    public void setOpenFire(Boolean openFire) {
        this.openFire = openFire;
    }
}
8. Re: JSF 2.0 and rendered attribute improvements - Martin Frey, Nov 11, 2010 11:45 AM (in response to Arbi Sookazian)
Sorry to crosspost, but I think what I did today for my issues with the rendered attribute applies here too:
Get end-to-end visibility into your Django performance with application monitoring tools. Gain insightful metrics on performance bottlenecks with Python monitoring to optimize your application.
1. Install the agent using pip:
pip install atatus[flask]
2. Initialize Atatus agent and add license key, app name in your main file.
from atatus.contrib.flask import Atatus
app = Flask(__name__)
# Add atatus agent to your app.
app.config['ATATUS'] = {
"APP_NAME": "Flask App",
"LICENSE_KEY": "lic_apm_xxxxxxx"
}
atatus = Atatus(app)
3. Restart your server
1. Go to your app directory and set your license key and app name to heroku config
heroku config:set ATATUS_APP_NAME="Flask App"
heroku config:set ATATUS_LICENSE_KEY="lic_apm_xxxxxx"
2. Add atatus.contrib to INSTALLED_APPS and set license key, app name in your settings.py.
INSTALLED_APPS = [
# ...
'atatus.contrib.django',
]
3. Add atatus to your project’s requirements.txt file.
# requirements.txt
atatus
4. Create a Procfile in your root directory and add the following line.
web: gunicorn yoursite.wsgi
5. Run the following commands to commit the changes.
git add .
git commit -m "Added Atatus Agent"
git push heroku master
heroku logs --tail
6. Access your app.
Atatus captures all requests to your Django app.
View the complete picture of the most time-consuming Django database queries and focus on slow database queries, along with traces that provide actionable insights.
Visualize where your code is spending most of its time in your Django app, which functions were executed, and for how long. Get the overview along with the breakdown of related database and network calls.
See a detailed overview of all the HTTP failures that are impacting your users. Find the status-code breakdown and, along with Django request parameters, the root cause of the API failures.
Automatically visualize end-to-end business transactions in your Django application. Monitor the amount and type of failed HTTP status codes and application crashes with Django monitoring. Analyze response time to identify Django performance issues and errors on each and every business transaction. Understand the impact of methods and database calls that affect your customer's experience.
Examine all SQL and NoSQL queries used by your Django app. Every Django error is tracked and captured with a full stacktrace, and the exact line of source code is highlighted to make bug fixing easier. Get all the essential data such as class, message, URL, request agent, and version to fix Django exceptions and errors. Identify buggy APIs or third-party services by investigating API failure rates and application crashes. Get alerts for application errors and exceptions via Email, Slack, PagerDuty, or webhooks.
Quickly view the highest Django HTTP failures and get each request's information, along with custom data, to identify the root cause of the failures. See the breakdown of API failures based on HTTP status codes and the end users having the highest impact.
Break down slow Django.
Header files
As mentioned above, libraries have header files that define information to be used in conjunction with the libraries, such as functions and data types. When you include a header file, the compiler adds the functions, data types, and other information in the header file to the list of reserved words and commands in the language. After that, you cannot use the names of functions or macros in the header file to mean anything other than what the library specifies, in any source code file that includes the header file.
The most commonly used header file is for the standard input/output routines in glibc and is called stdio.h. This and other header files are included with the #include command at the top of a source code file. For example,
#include "name.h"
includes a header file from the current directory (the directory in which your C source code file appears), and
#include <name.h>
includes a file from a system directory -- a standard GNU directory like /usr/include. (The #include command is actually a preprocessor directive, or instruction to a program used by the C compiler to simplify C code. See Preprocessor directives, for more information.)
Here is an example that uses the #include directive to include the standard stdio.h header in order to print a greeting on the screen with the printf command. (The characters \n cause printf to move the cursor to the next line.)
#include <stdio.h>

int
main ()
{
  printf ("C standard I/O file is included.\n");
  printf ("Hello world!\n");
  return 0;
}
If you save this code in a file called hello.c, you can compile this program with the following command:
gcc -o hello hello.c
As mentioned earlier, you can use some library functions without having to link library files explicitly, since every program is always linked with the standard C library. This is called libc on older operating systems such as Unix, but glibc ("GNU libc") on GNU systems. The glibc file includes standard functions for input/output, date and time calculation, string manipulation, memory allocation, mathematics, and other language features.
Most of the standard glibc functions can be incorporated into your program just by using the #include directive to include the proper header files. For example, since glibc includes the standard input/output routines, all you need to do to be able to call printf is put the line #include <stdio.h> at the beginning of your program, as in the example that follows.
Note that stdio.h is just one of the many header files you will eventually use to access glibc. The GNU C library is automatically linked with every C program, but you will eventually need a variety of header files to access it. These header files are not included in your code automatically -- you must include them yourself!
#include <stdio.h>
#include <math.h>

int
main ()
{
  double x = 1.0, y;
  y = sin (x);
  printf ("Math library ready\n");
  return 0;
}
However, programs that use a special function outside of glibc -- including mathematical functions that are nominally part of glibc, such as the function sin in the example above! -- must use the -l option to gcc in order to link the appropriate libraries. If you saved the code above in a file called math.c, you could compile it with the following command:
gcc -o math math.c -lm
The option -lm links in the library libm.so, which is where the mathematics routines are actually located on a GNU system.
To learn which header files you must include in your program in order to use the features of glibc that interest you, consult the Table of Contents of the GNU C Library Reference Manual. This document lists all the functions, data types, and so on contained in glibc, arranged by topic and header file. (See Common library functions, for a partial list of these header files.)
Note: Strictly speaking, you need not always use a system header file to access the functions in a library. It is possible to write your own declarations that mimic the ones in the standard header files. You might want to do this if the standard header files are too large, for example. In practice, however, this rarely happens, and this technique is better left to advanced C programmers; using the header files that came with your GNU system is a more reliable way to access libraries.
Learn how to classify images with TensorFlow
Create a simple, yet powerful neural network to classify images using the open source TensorFlow software library.
opensource.com
Recent advancements in deep learning algorithms and hardware performance have enabled researchers and companies to make giant strides in areas such as image recognition, speech recognition, recommendation engines, and machine translation. The computation steps are embarrassingly parallel and can be deployed to perform frame-by-frame video analysis and extended for temporal-aware video analysis. This series cuts directly to the most compelling material. A basic understanding of the command line and Python is all you need to play along from home. It aims to get you started quickly and inspire you to create your own amazing projects. I won't dive into the depths of how TensorFlow works, but I'll provide plenty of additional references if you're hungry for more. All the libraries and tools in this series are free/libre/open source software.
How it works
Our goal in this tutorial is to take a novel image that falls into a category we've trained and run it through a command that will tell us in which category the image fits. We'll follow these steps:
- Labeling is the process of curating training data. For flowers, images of daisies are dragged into the "daisies" folder, roses into the "roses" folder, and so on, for as many different flowers as desired. If we never label ferns, the classifier will never return "ferns." This requires many examples of each type, so it is an important and time-consuming process. (We will use pre-labeled data to start, which will make this much quicker.)
- Training is when we feed the labeled data (images) to the model. A tool will grab a random batch of images, use the model to guess what type of flower is in each, test the accuracy of the guesses, and repeat until most of the training data is used. The last batch of unused images is used to calculate the accuracy of the trained model.
- Classification is using the model on novel images. For example, input: IMG207.JPG, output: daisies. This is the fastest and easiest step and is cheap to scale.
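The three steps above can be mimicked in a few lines of plain Python. The file names and the lookup "model" below are invented for illustration, and no real training happens; the point is only the label -> train -> classify flow:

```python
import random

# Labeling: directory names act as labels, as in the flower data set.
labeled = {
    "daisies": ["d1.jpg", "d2.jpg", "d3.jpg"],
    "roses":   ["r1.jpg", "r2.jpg"],
}
examples = [(img, label) for label, imgs in labeled.items() for img in imgs]

# Training: shuffle and hold the last batch out to measure accuracy.
random.shuffle(examples)
test_batch, train_batches = examples[:2], examples[2:]

# Classification: a stand-in "model" that just looks the label up.
def classify(image):
    return next(label for img, label in examples if img == image)

print(classify("r1.jpg"))  # → roses
```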
Training and classification
In this tutorial, we'll train an image classifier to recognize different types of flowers. Deep learning requires a lot of training data, so we'll need lots of sorted flower images. Thankfully, another kind soul has done an awesome job of collecting and sorting images, so we'll use this sorted data set with a clever script that will take an existing, fully trained image classification model and retrain the last layers of the model to do just what we want. This technique is called transfer learning.
The model we're retraining is called Inception v3, originally specified in the December 2015 paper "Rethinking the Inception Architecture for Computer Vision."
Inception doesn't know how to tell a tulip from a daisy until we do this training, which takes about 20 minutes. This is the "learning" part of deep learning.
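Transfer learning can be illustrated numerically: pretend the frozen Inception layers have already turned each image into a fixed feature vector, and fit only a new final linear layer on top. Everything below is invented for illustration (the feature vectors, the labels, and a least-squares fit standing in for the real softmax retraining):

```python
import numpy as np

# Frozen "bottleneck" features for six images (made-up numbers).
bottlenecks = np.array([[ 2.,  1.], [ 1., -1.], [ 3.,  0.],
                        [-2.,  1.], [-1., -1.], [-3.,  0.]])
labels = np.array([1, 1, 1, 0, 0, 0])  # e.g. 1 = daisy, 0 = rose

# "Retrain the last layer": fit a linear classifier on the frozen
# features; the feature extractor itself is never updated.
targets = 2 * labels - 1               # map {0, 1} to {-1, +1}
w, *_ = np.linalg.lstsq(bottlenecks, targets, rcond=None)

predictions = (bottlenecks @ w > 0).astype(int)
accuracy = (predictions == labels).mean()
print(f"last-layer accuracy: {accuracy:.2f}")  # → last-layer accuracy: 1.00
```

The real retrain.py does the same thing at scale: cached bottleneck vectors in, a freshly trained final layer out.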
Installation
Step one to machine sentience: Install Docker on your platform of choice.
The first and only dependency is Docker. This is the case in many TensorFlow tutorials (which should indicate this is a reasonable way to start). I also prefer this method of installing TensorFlow because it keeps your host (laptop or desktop) clean by not installing a bunch of dependencies.
Bootstrap TensorFlow
With Docker installed, we're ready to fire up a TensorFlow container for training and classification. Create a working directory somewhere on your hard drive with 2 gigabytes of free space. Create a subdirectory called local and note the full path to that directory.
docker run -v /path/to/local:/notebooks/local --rm -it --name tensorflow \
  tensorflow/tensorflow:nightly /bin/bash
Here's a breakdown of that command.
- -v /path/to/local:/notebooks/local mounts the local directory you just created to a convenient place in the container. If using RHEL, Fedora, or another SELinux-enabled system, append :Z to this to allow the container to access the directory.
- --rm tells Docker to delete the container when we're done.
- -it attaches our input and output to make the container interactive.
- --name tensorflow gives our container the name tensorflow instead of sneaky_chowderhead or whatever random name Docker might pick for us.
- tensorflow/tensorflow:nightly says run the nightly image of tensorflow/tensorflow from Docker Hub (a public image repository) instead of latest (by default, the most recently built/available image). We are using nightly instead of latest because (at the time of writing) latest contains a bug that breaks TensorBoard, a data visualization tool we'll find handy later.
- /bin/bash says don't run the default command; run a Bash shell instead.
Train the model
Inside the container, run these commands to download and sanity check the training data.
curl -O
echo 'db6b71d5d3afff90302ee17fd1fefc11d57f243f flower_photos.tgz' | sha1sum -c
If you don't see the message flower_photos.tgz: OK, you don't have the correct file. If the above curl or sha1sum steps fail, manually download and explode the training data tarball (SHA-1 checksum: db6b71d5d3afff90302ee17fd1fefc11d57f243f) in the local directory on your host.
Now put the training data in place, then download and sanity check the retraining script.
mv flower_photos.tgz local/
cd local
curl -O
echo 'a74361beb4f763dc2d0101cfe87b672ceae6e2f5 retrain.py' | sha1sum -c
Look for confirmation that retrain.py has the correct contents. You should see retrain.py: OK.
Finally, it's time to learn! Run the retraining script.
python retrain.py --image_dir flower_photos --output_graph output_graph.pb --output_labels output_labels.txt
If you encounter this error, ignore it:
TypeError: not all arguments converted during string formatting
Logged from file tf_logging.py, line 82
As retrain.py proceeds, the training images are automatically separated into batches of training, test, and validation data sets.
In the output, we're hoping for high "Train accuracy" and "Validation accuracy" and low "Cross entropy." See How to retrain Inception's final layer for new categories for a detailed explanation of these terms. Expect training to take around 30 minutes on modern hardware.
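Cross entropy here is the mean negative log-probability the model assigns to the correct label, which is why lower is better. A toy calculation (the probabilities below are made up, not real training output):

```python
import math

def cross_entropy(probs_for_true_label):
    """Mean negative log-probability assigned to the correct labels."""
    n = len(probs_for_true_label)
    return -sum(math.log(p) for p in probs_for_true_label) / n

confident = cross_entropy([0.9, 0.8, 0.95])  # usually right -> low loss
unsure    = cross_entropy([0.3, 0.2, 0.25])  # usually wrong -> high loss
print(confident < unsure)  # → True
```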
Pay attention to the last line of output in your console:
INFO:tensorflow:Final test accuracy = 89.1% (N=340)
This says we've got a model that will, nine times out of 10, correctly guess which one of five possible flower types is shown in a given image. Your accuracy will likely differ because of randomness injected into the training process.
Classify
With one more small script, we can feed new flower images to the model and it'll output its guesses. This is image classification.
Save the following as classify.py in the local directory on your host:
import tensorflow as tf, sys
image_path = sys.argv[1]
graph_path = 'output_graph.pb'
labels_path = 'output_labels.txt'
# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
in tf.gfile.GFile(labels_path)]
# Unpersists graph from file
with tf.gfile.FastGFile(graph_path, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
_ = tf.import_graph_def(graph_def, name='')
# Feed the image_data as input to the graph and get the first prediction
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
    # Sort labels by score, highest confidence first
    top_k = predictions[0].argsort()[::-1]
    for node_id in top_k:
        print('%s (score = %.5f)' % (label_lines[node_id], predictions[0][node_id]))
To test your own image, save it as test.jpg in your local directory and run (in the container) python classify.py test.jpg. The output will look something like this:
sunflowers (score = 0.78311)
daisy (score = 0.20722)
dandelion (score = 0.00605)
tulips (score = 0.00289)
roses (score = 0.00073)
The numbers indicate confidence. The model is 78.311% sure the flower in the image is a sunflower. A higher score indicates a more likely match. Note that there can be only one match. Multi-label classification requires a different approach.
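Those scores behave like softmax probabilities: they are non-negative, they sum to 1, and only the single highest one counts as the match. A small sketch (the raw logits below are invented; this is not the model's actual output):

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["sunflowers", "daisy", "dandelion", "tulips", "roses"]
scores = softmax([2.0, 0.7, -2.8, -3.5, -4.9])

# The single best guess, reported the way classify.py prints it.
score, label = max(zip(scores, labels))
print(f"{label} (score = {score:.5f})")
```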
For more detail, view this great line-by-line explanation of classify.py.
The graph loading code in the classifier script was broken, so I applied the graph_def = tf.GraphDef(), etc. graph loading code.
With zero rocket science and a handful of code, we've created a decent flower image classifier that can process about five images per second on an off-the-shelf laptop computer.
In the second part of this series, which publishes next week, we'll use this information to train a different image classifier, then take a look under the hood with TensorBoard. If you want to try out TensorBoard, keep this container running by making sure docker run isn't terminated.
15 Comments
I tried to get the retrain.py script, but got a "No such file or directory" message.
Also, the term "labeling" seems misused. This seems to be a categorization of images by assigning them to a particular directory, but the images are not labeled. When I use Shotwell and attach a tag to an image, that's labeling.
Hi Greg, thanks for reading my article. Did you run the "curl" command inside the container?
Regarding labeling, think of the directory itself as a "tag" or "label". Same concept.
I copied the command straight from the page. The other command worked Ok.
Did you run it in the container?
I don't know what that means or why it should make a difference. The other curl command ran without problems. AFAIK, curl should work by itself, with or without any container.
I figured out the problem. There is a line break in the middle of the curl command for getting retrain.py, which means you can't copy it, then paste it on the command line and have it work properly. I had to paste it into a text editor, then join the lines into one, then copy that.
Hi Adam,
Thank you for the article... I am a little bit confused with the following command
docker run -v /path/to/local:/notebooks/local --rm -it --name tensorflow
Does it means that "/path/to/local:/notebooks/local" is the path to my "local" folder on the local machine? When I run the command it pulls the tensorflow image and locates me under temporal name (e.g. root@b4d5c077eecc:/notebooks#) and if I try to move archive with flower images to "local folder" via
mv flower_photos.tgz local/
it tells that there is no such a directory ....
What might be the problem?..
Thanks in advance...
Hi Oleksiy, what we're trying to do with this "-v" argument is map a directory on the host to a directory inside the container. "/path/to/local" is what you should change to be the actual absolute path to a/the directory called "local" on your host. "/notebooks" is where you end up when you exec into the container, and "/notebooks/local" will be available in the container and on your host simultaneously. Does that help?
Hi Oleksiy, you're welcome! Thanks for reading.
The -v argument is used to specify a Docker volume mount. There are two parts, separated by a colon. HOST_PATH:CONTAINER_PATH.
Change "/path/to/local" to the absolute path to the "local" directory you created on your host. If your current working directory is "/home/user" and you created a "local" dir in there, your docker run command line would include "-v /home/user/local:/notebooks/local".
Note that you must also download the flower_photos.tgz tarball from inside the container. See this step in "Train the model".
I hit an error at :
curl -O...
6d2793cb67207a085d0/tensorflow/examples/image_retraining/retrain.py
echo 'a74361beb4f763dc2d0101cfe87b672ceae6e2f5 retrain.py' | sha1sum -c
...and i typed this in manually (did not miss any letters or numbers), and tried different spacing before retrain.py...
why is the path so machine code jibberishy... where does it come from?
so getting stuck here has been disappointing as I was aiming to follow along all the way through. please advise
Looking frwd to part 2 which should be out now I am thinking...
Looks like a URL got broken up. These long hexadecimal strings are generated by git. I hope you like part 2, too!
UPDATE: URL fixed!
As I tried in my macbook, the docker will set 2G memory by default, which is not adequate to run the training. In my laptop, the docker will restart when running the training. So I set the docker memory to 3.5G, seems everything is fine.
Thanks for the article.
Thanks for this good starting point. I should have read the comments before following along as I got tripped up by the path_to_local bit and have no idea where mine ended up so I couldn't copy in a test image to use with classify.py
The training appeared to run fine.
I'll have to repeat it. Before I do is there a way to get a decent host and container name instead of something like: cca8370680ae
The container name will be "tensorflow" because we passed that to the "--name" argument. I don't know how to control Docker container hostnames or IDs.
How to implement the Softmax function in Python
From Udacity's deep learning class, the softmax of y_i is simply its exponential divided by the sum of exponentials over the whole Y vector:

S(y_i) = e^(y_i) / sum_j e^(y_j)

where
S(y_i) is the softmax function of
y_i,
e is the exponential, and
j ranges over the elements of the input vector Y.
I've tried the following:
import numpy as np

def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

scores = [3.0, 1.0, 0.2]
print(softmax(scores))
which returns:
[ 0.8360188 0.11314284 0.05083836]
But the suggested solution was:
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)
which produces the same output as the first implementation, even though the first implementation explicitly subtracts the max from each entry before exponentiating and dividing by the sum.
Can someone show mathematically why? Is one correct and the other one wrong?
Are the implementation similar in terms of code and time complexity? Which is more efficient?
They're both correct, but yours is preferred from the point of view of numerical stability.
You start with
e ^ (x - max(x)) / sum(e ^ (x - max(x)))
By using the fact that a^(b - c) = (a^b)/(a^c) we have
= e ^ x / (e ^ max(x) * sum(e ^ x / e ^ max(x))) = e ^ x / sum(e ^ x)
Which is what the other answer says. You could replace max(x) with any variable and it would cancel out.
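A quick numerical check (my sketch, not part of the original answer) makes the stability difference visible once the scores get large:

```python
import numpy as np

def softmax_naive(x):
    # Direct translation of the formula; exp() overflows for large scores.
    return np.exp(x) / np.sum(np.exp(x), axis=0)

def softmax_stable(x):
    # Shift by max(x) first, so every exp() argument is <= 0.
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

small = np.array([3.0, 1.0, 0.2])
# Both versions agree on well-behaved input.
print(np.allclose(softmax_naive(small), softmax_stable(small)))  # True

big = np.array([1000.0, 1001.0, 1002.0])
with np.errstate(over='ignore', invalid='ignore'):
    print(softmax_naive(big))  # [nan nan nan] -- exp(1000) overflows to inf
print(softmax_stable(big))     # valid probabilities summing to 1
```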
From: stackoverflow.com/q/34968722
Couple of questions for you:
Can a type param be both in and out so would the following be allowed:
public interface ICopier<in out T>
{
T Copy(t item);
}
Also can you explain why the need for the extra keywords? I can't imagine why but I am sure there is a reason.
thanks
Josh
I'm with Josh, please explain us why we need those in-out keywords. I can't figure out why is needed to do something that is what you expect, specially when talking about covariance
I think I understand the background. This is a feature created to support F# functional programming types. Immutable functions without side effects is a HUGE thing in functional programming. The contravariant delegates are a way of representing these functions in C#.
Basically, delegates with "in" parameters are delegates that can produce no side effects. "out" delegates are the more common garden variety delegates we all know and love.
Daedius, in this example the "out" keyword is not a "common delegate". Indeed, it is making something that actually cannot be done "covariance".
delegate void Action1<in T>(T a);
static void Main(string[] args)
{
// Covariance
Func1<Cat> cat = () => new Cat();
Func1<Animal> animal = cat;
// Contravariance
Action1<Animal> act1 = (ani) => { Console.WriteLine(ani); };
Action1<Cat> cat1 = act1;
}
What I´m really want to know, is why the compiler cannot do this automatically, simply by allowing you do covariance and contravariance without any keyword at all. I think if it was possible actually in some parts of the language why not in generics and delegates? As Charlie has said "In Visual Studio 2010 delegates will behave as expected", so if it's something we expect, why we are forced to use a special keyword to get expected behavior?
It is a limitation in the name of type safety. You are declaring that the interface or delegate can accept base or derived classes of the generic parameter by explicitly defining that said interface or delegate will either only allow T as input, or only allow T as output. The interface or delegate cannot be both.
If you are using covariance it means that the interface or delegate can output a value that is a class of the specified type parameter or derived from that class. If you pass a Func<Cat> to a method that expects a Func<Animal> it works fine since it will return a Cat, which is derived from Animal. That method can then treat the Cat as if it were an Animal without any ill effects.
If you are using contravariance it means that the interface or delegate will receive input of a value that is a class of the specified type parameter or derived from that class. If you pass an Action<Animal> to a method that expects an Action<Cat> it works fine as it will pass a Cat which is derived from Animal. The body of the called delegate can treat the Cat as if it were an Animal without any ill effects.
These roles are pretty specific and cannot be reversed. A method that takes a Func<Cat> cannot accept a Func<Animal> because the Animal that it returns might not be a valid Cat and it would fail. Similarly a method that takes an Action<Cat> cannot accept an Action<Animal> for the same reason. The new keywords really only apply to the implementers of fairly fundamental structures, such as IEnumerable<out T> or Action<in T>. Their benefit comes offering the flexibility in the consumption of types that consume those structures. You might never use those keywords, but because of them you can now do this:
// Illegal in C# 3.0
IEnumerable<Animal> animals = new List<Cat>();
"IEnumerable<Animal> animals = new List<Cat>();"
This is HUGE. I can't believe how many special interfaces I have to use to get true "programming to an interface" right now just because of this limitation.
For example, right now I have to do this:
Domain object, with CRUD or Mapper ops (much easier to code to a concrete or abstract class collection than to an interface that has no CRUD methods):
public IList<RuleState> ChildStates
get { return m_ChildStates; }
then in the interface for my runtime configurator (on the same class), I have to do this:
IList<IRuntimeRuleState> IRuntimeConfig.GetRuleStates()
List<IRuntimeRuleState> list = new List<IRuntimeRuleState>();
foreach(IRuntimeRuleState rs in m_ChildStates)
list.Add(rs);
return list;
In c# 4, I could just do:
return m_ChildStates;
Much easier to code, not to mention far more efficient.
Can't wait.
Your comment about this "acting as the programmer expects" is exactly right. I was very excited about generics, but my excitement was tempered when I found out I couldn't do:
...
private List<SomeObject> m_Field;
public IList<ISomeObject> Prop
get { return m_Field; }
As I posted previously, this covariance fix will alleviate this problem, and make us all better programmers by giving us the tools to be better programmers.
Unfortunately, it won't alleviate your problem specifically, because IList<T> cannot be covariant. The reason is that it uses T both in arguments of methods - such as "void IList<T>.Add(T)"; and in return values - such as "T IList<T>.this[int]". This means that you won't be able to write this in C# 4.0:
interface IFoo { ... }
class Foo : IFoo { ... }
IList<IFoo> list = new List<Foo>();
In fact, at a quick glance, there are only a few basic types in the FCL that would be covered by new variance declarations: Predicate<T>, Comparer<T>, Action<...> and Func<...>, IEnumerable, IEquatable, IComparable. ICollection<T> and anything further down the inheritance chain will have to remain invariant.
As a result, I doubt that we'll see "in" and "out" in application code in class declarations often (if at all). On the other hand, it will probably become good style to always use them when declaring delegate types.
I'm with int19h on this one. Since you cannot have both in and out applied to the parameter, it seems like you won't be able to use covariance/contravariance with the most interesting collections...
btw, should I assume that we won't be getting Design By Contract on C# 4.0? Why isn't this part of tha language?It's one of those things that should have been there for a long time!
Really guys, this is so far away from what programming really means, ie working with real people to find out what they want and then implementing it that it's a total waste of time.
Has anyone here actually done a programming for a living (and I don't count writing programming books or similar wanky stuff as programming)
Thanks Halo_Four, you have cleared a lot of things.
So the problem here is with classes that output and input the same generic parameter, so that is why it cannot be done without the "in" and "out" keywords. Also note that as Bruce Pierson and int19h commented, unfortunately this will not cover all the expected cases, like those when the parameter is used both in and out.
OOP Sceptic I´m in a real world project right know and those discussions allow me to understand better the tool that I´m using every day. And my better understanding of the tool means better products for the client, and better support.
Thanks, int19h, for the clarification, even though it's depressing...
@OOP Sceptic
I currently have many customers using my software to actually run their manufacturing businesses (boats, trailers, clothing, and more), and it suffers from being difficult to modify to suit their needs because of the lack of flexible design patterns, and old-style procedural programming. I'm currently re-writing it using expert design pattern guidance, and I cannot believe the difference it makes to my sanity when I take the time to understand and apply these principles.
This is indeed very promising and will be helpful in many scenarios, but I'm dissapointed to see that it doesn't solve the problems with ICollection and IList.
Would it not be possible to define a new set of collection interfaces without this problem?
Or alternatively, add a mechanism that allows specifying the variance at the method level instead of at type level, so that when a generic type parameter is used for both in and out the specific use (for a method) can be explicitly specified.
It may be necessary or desired to also specify the variance on consumption, similar to how ref/out must be specified both at declaration and on invocation.
Mind you, I don't usually think about these things and Eric clearly lives in another dimension.. :)
That said, surely there must be some way to allow people to write "IList<IFoo> list = new List<Foo>();" as this is such a common pattern. I'll take compiler magic like for "int? i = null" over any "will not compile" error.
You can't allow "IList<IFoo> list = new List<Foo>();", otherwise you could end up with:
class Foo : IFoo {}
class Bar : IFoo {}
list.Add(new Bar());
The only option would be to split the interface declaration into four parts - non-generic (Clear, Count, IsReadOnly, RemoveAt), covariant (GetEnumerator, indexer), contravariant (Add, Contains, IndexOf, Insert, Remove) and invariant (CopyTo).
Note that CopyTo would have to be invariant because it accepts an array of type T. You can only pass an array of U to a method which expects an array of type T if U is derived from or equal to T, but you can only put an object of type T in an array of type U if T is derived from or equal to U. Therefore, the only type U which satisfies both conditions is the type T, and the method must be invariant.
I think that this also explains why the compiler can't automatically infer the "variance-ness" of a type parameter.
Example:
static void Fill(IFoo[] values)
values[0] = new Foo();
values[1] = new Bar();
Fill(new object[2]); // Compiler error
Fill(new Foo[2]); // ArrayTypeMismatchException
Fill(new Bar[2]); // ArrayTypeMismatchException
Fill(new IFoo[2]); // Works
And if you can follow that gibberish, you've probably had too much coffee! ;)
Sorry - I obviously meant to say that the indexer get would be covariant, while the indexer set would be contravariant.
Nice to finally see this in the language, but the choice of keywords is just horrible:
1.) we already have the out keyword for parameters, but with completly different semantics. And generic type "parameters" are similar enough to function parameters to confuse many people, especially if they are new to the language.
2.) without looking I cannot tell which one of the keywords is used for contravariance and which is used for covariance. "In" and "out" is not vocabulary I'd normally use when talking about type hierarchy relations.
3.) there are far better alternatives. Why not simply use "super" and "sub" instead. It is far easier to remember that "sub T" means you can use subtypes of T and "super T" means you can use supertypes of T.
Actually, "in" and "out" are fairly obvious because they clearly outline the restrictions on the usage of a type parameter. An "in" type parameter can only be used for values that are inputs of methods - that is, non-out/ref arguments. An "out" type parameter can only be used for values that are outputs of methods - return values, and out-arguments
> Would it not be possible to define a new set of collection interfaces without this problem?
It is possible if you basically split each collection interface into three parts: covariant, contravariant, and both. E.g. for a list:
interface IListVariant
int Count { get; }
bool IsReadOnly { get; }
void Clear();
void RemoveAt(int);
interface IListCovariant<out T>
T Item[int] { get; set; }
IEnumerator<T> GetEnumerator();
interface IListContravariant<in T>
void Add(T);
bool Contains(T);
void CopyTo(T[], int);
int IndexOf(T);
void Insert(int, T);
bool Remove(T);
interface IList<T> : IListVariant<T>, IListCovariant<T>, IListContravariant<T>
And so on for all other collection types (and all other interfaces that could also be so split). So far I haven't seen any indication that this is going to happen in .NET 4.0 (at least it's not mentioned on the "what's new in 4.0" poster), and, looking at the code above, I think it is understandable :)
> Or alternatively, add a mechanism that allows specifying the variance at the method level instead of at type level, so that when a generic type parameter is used for both in and out the specific use (for a method) can be explicitly specified.
I will repeat what I said elsewhere, and just say that the proper way to enable full variance is to do what Java guys did, and use variance markers at use-site, not at declaration-site, same as Java does it. For example, here are two methods that use IList<T> differently:
// Method can take IList<T> where T is string or any base class of string
void AddString(IList<in string> list, string s)
// cannot use any methods of list that return values of type T here, only those who take arguments of type T
list.Add(s);
//list[0]; // illegal here!
// Method can take IList<T> where T is object or any class derived from object
object GetFirst(IList<out object> list, int i)
// cannot use any methods of list that take arguments of type T here, only those that return values of type T
return list[i];
//list.Add(s); // illegal here!
@ iCe and Bruce Pierson
Thanks for responding. I too have many hundreds of people running their finance businesses on my software.
I appreciate that advances in any particular technology are going to improve that technology, BUT it seems to me that all these somewhat esoteric terminologies are just solutions waiting for problems.
Maybe this new stuff will help you, but it's a little late in the day for magic solutions - surely these wacko extension merely highlight the flaws in OOP any way!
@ int19h:
CopyTo can't be contravariant, as I tried to explain above. Passing an array is equivalent to passing a ref parameter. Although the method shouldn't read from the array, there's nothing in the declaration to prevent it.
Although it would be nice if "out" parameters could be used in contravariant interfaces, I don't think the CLR would support it. As I understand it, the only difference between "out" and "ref" parameters is the C# compiler's definite assignment checks.
>As I say, contravariance is a bit confusing.
maybe the names are weird but usage is just polymorphism (some interface is expected).
I can already see this is going to get misused, instead of using a factory pattern for covariance and aggregation to implement contravariance (pass your concrete object implementing an interface into another object that will make it work).
i wish there was some real world usage in these examples..
> instead of using a factory pattern for Covariance and aggregation to implement Contravariance. (pass you concrete object implementing an interface into another object that will make it work)
Since covariance and contravariance in C# 4.0 will work only on interfaces (and delegates, which are semantically really just one-method interfaces), I don't see your point.
In fact, I don't understand it at all. How would aggregation help deal with the present problem that IEnumerable<Derived> cannot be treated as (i.e. cast to) IEnumerable<Base>, even though it is clearly typesafe and meaningful to do so?
@OOP: I think learning about aditions to the language is a great way for us to learn new ways to applys technology solutions through code to our business problems. While I was very skeptical of Linq at first, I have come to enjoy Linq to Objects, as a quick way to filter and sort my collections (Especially when binding to grids).
@Everyone Else: I still don't get the need for the new key words, and I'm afraid unless I sat down with an expert who could pound it into my head with a base ball bat I won't get it. Since I'm the sole developer at my company, guess I'll have to finally go and attend a community event.
Thanks
> IEnumerable<Derived> cannot be treated as (i.e. cast to) IEnumerable<Base>, even though it is clearly typesafe and meaningful to do so?
just so all can see what you mean, here is the test:
class Test
{
public Test()
{
List<Base> items = new List<Base>(this.GetItems());
}
public IEnumerable<Derived> GetItems()
yield return new Derived();
}
public class Base
public class Derived : Base
it fails to compile.
I agree semantically it 'could' compile.
On purpose, during the Semantic Analysis stage, the compiler won't let it compile.
Why?
You should be returning:
public IEnumerable<Base> GetItems()
so you never couple higher layers with concrete types.
not IEnumerable<Derived> which is not meaningful since the Test class only needs the Base interface.
again i wish there was some real world examples on why this is actually needed.
As someone pointed out much earlier (so I am not 1st) - the keywords are necessary - you can't know - at runtime - what the actual generic type of a generic collection is, and furthermore - using a "generic" cast you will lose the type information which is necessary for compile-time checks, and that leads to stupid runtime fails, of which I dreamed to forget when I first saw generics...
public class Base
{
}
public class Derived : Base
public class Tests {
public static void Test1() {
List<Derived> derivedList = new List<Derived>();
List<Base> baseList = derivedList; // sounds fair?
baseList.Add(new Base()); // wrong, collection is actually of Derived type, and...
Derived d = derivedList[0]; // should actually make an implicit up-cast to work (and fail in this case)
}
So you cant know, at run time, what type is safe for generic cast, unless you limit this type to use "T" only in input or only in output method parameters.
Wow, that's intense stuff. Thanks for the summary. Sure, I'll learn about lambduh's, dynamic and functional C# programming features, but seriously, I haven't had OOP problems in C# where I even need to know how to spell variance, covariance and contravariance. Oh, and I have personally released hundreds of thousands of lines of pure C# in production right now for my clients; just one app right now is 557,000 loc. I bet I could reduce the loc, but at what price, maintainability and readability? No thanks. Small focused classes lead naturally to composition, which is far superior to inheritance for most pattern implementations to get green tests and keep them green.
Correspondence Analysis
Often described as “the categorical analogue to PCA”, Correspondence Analysis is a dimension-reduction technique that describes the relationship and distribution between two categorical variables.
Reading papers on the topic proved to be needlessly dense and uninformative– my lightbulb moment on this topic came when I stumbled across Francois Husson’s fantastic tutorial series on YouTube. 100% worth the watch and is where I’ll pull many of my images from.
Intuition
For starters, this analysis assumes that our data is prepared as a cross-tabulation of two categorical variables. In the illustration below, there are
L records, each with two categorical variables. This leads to a cross-tab matrix with all
I distinct
V_1 values as rows and
J distinct
V_2 values as columns.
from IPython.display import Image
Image('images/ca_data.PNG')
More concretely, the dataset that Francois uses looks at the distribution of Nobel Prize wins by Category and Country
import pandas as pd
df = pd.read_csv('nobel_data.csv', index_col='Country')
df
Conditional Probabilities
Unpacking further, the explanation that finally stuck for me was deeply rooted in conditional probability. Here, the row, column, and total sums play a crucial role in computation. This allows us to start expressing conditional probabilities, predicated on overall counts (for this, he uses notation that I’ve never seen before, as I’ve highlighted below)
Image('images/ca_probs.PNG')
Ultimately, the core mechanic of Correspondence Analysis is an examination of how much our data deviates from an assumption of complete independence. This is a direct extension of the Chi Squared Test.
Image('images/ca_chi_sq.PNG')
In the context of a cross-tab, if our variables all had independence, we’d assume that the values in our rows would be distributed consistently with the proportion of the totals, and similarly for the columns.
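That assumption is easy to sketch in code. Using a tiny made-up cross-tab (not the Nobel data): under full independence, each cell is (row total × column total) / grand total, and the chi-squared statistic sums up the deviation from that expectation:

```python
import numpy as np

# Hypothetical 2x3 cross-tab of counts for two categorical variables.
N = np.array([[20., 30., 50.],
              [40., 10., 50.]])
n = N.sum()

# Expected counts under full independence: outer product of the margins.
expected = np.outer(N.sum(axis=1), N.sum(axis=0)) / n

# Chi-squared statistic: squared deviations, scaled by expectation.
chi2 = ((N - expected) ** 2 / expected).sum()
print(expected)  # the third column matches the observed counts; the first two deviate
print(chi2)
```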
These two graphics do a fantastic job representing this notion. For example, we can see that
Italy has a
Literature proportion way off of the mean row value, and
Economics prizes are disproportionally won by people from the
US.
display(Image('images/ca_row_profile.PNG'))
display(Image('images/ca_col_profile.PNG'))
Bringing it home, we can then plot all of the conditional probabilities into a point cloud.
The point
G represents the center of gravity for the cloud.
display(Image('images/ca_row_cloud.PNG'))
However,
G has a neater intuitive meaning. In the case that our data is all independent, all of the points will just sink to the center of gravity and it will be a single point, not a point cloud.
display(Image('images/ca_col_cloud.PNG'))
The fact that we instead represent all of the conditional probabilities with a cloud, however, means that there is some measure of deviation from the origin,
G. We call this the inertia of the point cloud.
To restate: Inertia measures the deviation from independence.
Finally, the whole purpose of Correspondence Analysis is to get our data to this point and then find the orthogonal projections that explain the most inertia.
Image('images/ca_max_intertia.PNG')
Now where have we heard this before?
Translating
The two point-clouds are generated from the same dataset, so it stands to reason that there should be some way to translate from one to the other, yeah?
For this, Correspondence Analysis borrows a $2 word from astrophysics, called barycenter, which is basically the center of mass between arbitrarily-many objects. Per wikipedia, in the
n=2 case, where the red
+ is the barycenter:
Image('images/barycenter.PNG')
As it relates to our usecase, the barycenter of a row on a given axis can be thought of as a weighted average of the column representation.
For a given derived axis
s, the columnar representation
G(j) is weighted by the sum of all conditional probabilities across those columns, then finally scaled by the
lambda value for that axis (more on this below).
Image('images/ca_barycenters.PNG')
This is a bit of a mouthful to take in, but this allows us two nice properties:
- The reverse is true – we can swap the G and F terms and this still works
- Because of this, when we plot the rows and columns in the new space defined by the various s axes, each row is closest to the columns it's most associated with, and vice-versa.
Relationship to SVD
As mentioned above, the literature on Correspondence Analysis is (in my estimation) needlessly dense. For example, this pdf begins
CA is based on fairly straightforward, classical results in matrix theory
and makes a real hurry of throwing a lot of Greek at you. I might come back and revise this notebook, but for the time being, I think Francois’ explanation is all the know-how we’re going to need on this matter, save for “how do we actually find the axes?”
For this, first refer to my notes on SVD.
Then grabbing a small chunk of the equations they throw at you, I want to highlight two things:
Image('images/ca_paper.PNG')
We get the following for free:
- N and n represent the crosstab matrix and total counts
- These are used to build a simple matrix of probabilities P
- r and c represent the row and column proportions, which both add to 1
- D_r and D_c are the diagonal matrices of the row and column spaces
And so it looks like equations A.8-10 are the nuts and bolts of representing our newfound axes s. Moreover, it looks like we can get this if we can get the values in A.6-7, which in turn need the values U and V, the typical results of doing SVD, in A.5.
The trick to all of this is answering "SVD on what?" For which, we can cleverly construct a matrix, S, of standardized residuals. Once that sentence makes sense to you, you can close the pdf. Residuals should make you think of error, and error should mean the difference between predicted and observed, and as mentioned, "predicted" actually means "the assumption that everything just follows the population distribution, P."
Armed with that intuition,
S becomes the key used to unlock the rest. Of course, there’s a handy Python package to keep all of this math straight for us.
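Before reaching for that package, the whole recipe above fits in a few lines of NumPy. This is a sketch under my own reading of the A.5-style computation, run on a small hypothetical cross-tab rather than the Nobel data:

```python
import numpy as np

# Hypothetical cross-tab of counts (rows x columns).
N = np.array([[16.,  4.,  2.],
              [ 4., 16.,  2.],
              [ 2.,  2., 16.]])

P = N / N.sum()        # correspondence matrix of probabilities
r = P.sum(axis=1)      # row masses
c = P.sum(axis=0)      # column masses

# Standardized residuals: deviation from the independence model r c^T.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# SVD of S gives the principal axes; squared singular values are the
# eigenvalues, i.e. the inertia explained by each axis.
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
eigenvalues = sv ** 2
total_inertia = (S ** 2).sum()  # equals chi-squared / n

# Principal coordinates of the rows (columns are symmetric, via Vt and c).
row_coords = (U * sv) / np.sqrt(r)[:, None]
print(eigenvalues)
print(total_inertia)
```

Because the axes are orthogonal, the eigenvalues sum to the total inertia, so dividing one by the other gives the "explained inertia" proportions discussed in the Metrics section.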
On Our Dataset
So running CA on the dataset above, Francois plots out the points from the cross-tab, relative to the new axes he found.
Image('images/ca_scatter.PNG')
The first thing he points out is the distance to the center of the scatter plot.
Broadly-speaking, this is a good proxy for “how dissimilar to ‘everything follows the same distribution’”.
For example,
UK is basically on the axis, and we can see below that the distribution of awards won looks VERY close to the sample distribution.
Italy, on the other hand, has won dramatically more of the green category and is thus far-flung from the origin.
Image('images/ca_uk_zoomed.PNG')
Similarly, the
Economics prizes seem disproportionally won by the
US (orange), so it’s pretty far off-origin. However, the
US does seem to win a respectable proportion of the prizes in each category.
A better example to look at is
Italy (teal). On average, they make up a tiny speck of overall awards, but because they’re a good 20% of the
Literature prize winners, their point is the furthest-right of the whole shebang.
Image('images/ca_econ_zoomed.PNG')
Metrics
Eigenvalues and Explained Inertia
So after we find our orthogonal axes that best span the point cloud, the eigenvalues that we find represent the explained inertia of a given axis.
Image('images/ca_eigen_value.PNG')
Then, because our axes are orthogonal, we can add the eigenvalues together and divide by the total inertia to get a look at how well an axis captures the overall spread of our point-cloud.
Image('images/ca_avg_inertia.PNG')
An eigenvalue equaling 1 means that it accounts for a perfect separation between two blocks in the data. As a toy example, Francois cooks up a small table of taste profiles. The rows are whether a tasted sample was sweet, sour, or bitter. The columns are how they were perceived by tasters.
The first CA-generated axis helps separate our data into two distinct groups, “Sweet” and “Maybe sour or bitter”, with perfect accuracy.
Image('images/ca_eigen_val_is_one.PNG')
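The "eigenvalue equals 1" behavior is easy to check numerically. Here is a hypothetical block-structured table (my own made-up counts, not the ones from the post) in which sweet samples are never confused with the sour/bitter block; the leading CA eigenvalue comes out as 1:

```python
import numpy as np

# Hypothetical taste table: rows = actual taste, columns = perceived taste.
# "Sweet" never mixes with the sour/bitter block.
N = np.array([[10, 0, 0],   # sweet
              [ 0, 6, 4],   # sour
              [ 0, 3, 7]],  # bitter
             dtype=float)

P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

sv = np.linalg.svd(S, compute_uv=False)
print(sv[0] ** 2)  # 1.0 (up to floating-point rounding): perfect separation
```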
On the other hand, let's look at two slightly-adjusted versions of the dataset. The second axis should help us determine the difference between sour and bitter.
The data on the left is better-separated than the data on the right, and thus the Axis 2 eigenvalue is higher and the scattered points are more spread apart.
Image('images/ca_eigen_val_isnt_one.PNG')
Total Explained Inertia
This intuition in mind, when we revisit the Nobel dataset, two things stand out:
- The explained inertia for the first axis is far less than 1, so there's no straightforward cut in the data
- The total explained inertia is much less than 5 (a heuristic value, I suppose), so the data isn't as well-separated as we might have thought at first glance.
Image('images/ca_revisit_nobel.PNG')
In Python
Correspondence Analysis is made pretty simple by the
prince library.
Continuing with our original dataset, we’ll follow a workflow similar to what we’d do in
sklearn
from prince import CA

ca = CA(n_components=2)
ca.fit(df)
CA(benzecri=False, check_input=True, copy=True, engine='auto', n_components=2, n_iter=10, random_state=None)
we’ve deconstructed the countries into two principal components and can see a good deal of separation.
%pylab inline
display(ca.row_coordinates(df))
ca.row_coordinates(df).plot.scatter(0, 1);
Populating the interactive namespace from numpy and matplotlib
Similarly, we can look at how the columns are mapped in this new space.
display(ca.column_coordinates(df))
ca.column_coordinates(df).plot.scatter(0, 1);
But even cooler, it’s super easy to plot the two together via the expressive
plot_coordinates() function. This looks like the images we’ve been looking at all along, just flipped across a couple axes.
ca.plot_coordinates(df, figsize=(8, 8));
Finally, we can easily inspect the various metrics that we want to pay attention to.
ca.eigenvalues_
[0.08333122262451487, 0.03744306648476863]
ca.total_inertia_
0.1522091104308082
ca.explained_inertia_
[0.5474785470374054, 0.24599753837855615]
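As a quick sanity check, explained_inertia_ is just each eigenvalue divided by total_inertia_, using the exact numbers printed above:

```python
eigenvalues = [0.08333122262451487, 0.03744306648476863]
total_inertia = 0.1522091104308082

explained = [ev / total_inertia for ev in eigenvalues]
print(explained)  # matches ca.explained_inertia_ above
```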
We are very excited to announce the first release of Solution Navigator, a new tool that merges functionality from Solution Explorer, Class View, Object Browser, Call Hierarchy, Navigate To, and Find Symbol References into a single view. This view can be surfaced as a tool window or, for C# and VB, an interactive tooltip.
The Solution Navigator is included in the latest release of the Visual Studio Productivity Power Tools which is a free download available on the Visual Studio Gallery.
Let’s look at each of these features in more depth.
Expand code files to navigate to their classes, expand classes to navigate to their members, and so on
Traditionally users have had to work across multiple tool windows to gain both a file-based and class-based view of their solution. The Solution Navigator enables users to navigate from the solution level right down to local variables contained inside a method.
Note: Currently this is only available for C# and VB languages.
Search your solution, all the way down to class members
The Solution Navigator uses the 'Navigate To' feature to search your solution. The search results are presented as a filter in the Solution Navigator tree view, with the search term highlighted and non-matching visible items (such as a folder containing a search result) greyed out.
Users can search the entire solution or scope the search to a single project, file, class or type; scoping is covered later in this post.
Filter your solution or projects to see just opened files, unsaved files, and so on
The Document Well has traditionally been the primary UI for managing open files. However, users can now easily access and search through the documents that are open, unsaved, or have been edited during the current session.
Open
Shows the files that are currently open.
Unsaved
Shows the files which have been edited but not saved.
Edited
Shows the files that have been edited during the current session.
Note: The edited filter is not the same as a pending changes list; the edited list will show a file even if an edit has been reverted.
Preview images by hovering over them, or preview rich information by hovering over code items
Solution Navigator provides a visual tooltip for most image files:
Scope the view to just the project, file or class you are working on
When working on large solutions, you might find yourself spending the majority of your time working with one or two classes in a single project. Scoping or ‘re-rooting’ the Solution Navigator tree view enables you to focus on the files that matter for your current task. Scoping is also useful when searching large solutions, by scoping to the project or type you are interested in.
To Scope the view to the current selection in the Solution Navigator, click the re-root button on the far right of the item, or use the keyboard shortcut Ctrl + -> (right arrow).
Create multiple views of your solution so you can always access the files you need
Unlike Solution Explorer, Solution Navigator enables you to create multiple instances of the tool window. This can be especially useful when working across large projects. Each tool window can have a uniquely-scoped view. These views could be focused around a single project, class or file, or even based from search results.
View related information about classes and members (such as references or callers/callees for C#)
Understanding how a particular method or class works often requires stepping through how the rest of the application interacts with it. Understanding where a method is referenced in the solution is one of the most useful pieces of information in this task. Solution Navigator brings together some of the key relationships such as references which can exist between classes, methods, and variables and surfaces them inline.
Relationships are only seen for the root item. To see class/member relationships in the tool window, you need to re-root. The interactive tooltips are already rooted at a code item.
Note: Some relationships are available in C# only, we will be looking to increase the number of relationships and languages supported in future releases.
Relationships shown on classes:
Contains – The members defined by the current class (e.g. methods, properties, …)
References – The places where the current class is referenced in the current solution
Returned By – The methods that return instances of the current class (C# only)
Derived Types – The types that are derived from the current class (C# only)
Base Types – The types that the current class derives from (C# only)
Relationships shown on methods:
Contains – The parameters contained within the current method, plus local variables if the method is part of the current solution
References – The places where the current method is referenced in the solution
Called By – The methods that call into the current method (C# only)
Calls – The methods that are called by the current method (C# only)
Relationships shown on members:
Type – The data type of the current member
References – The places where the current member is referenced in the solution
Solution Navigator Tool Window Buttons and Commands
Solution Navigator Interactive Tooltip Buttons and Commands
The purpose of the tooltip is to surface the rich relationship information from Solution Navigator inside the editor.
- Hover over elements to see the tooltip, then click anywhere on the tooltip to switch the view to an interactive mode
- Press Ctrl+1 to open a relevant tooltip at the current cursor location
- Press Ctrl+2 to quickly navigate to any class/member in the current source file
- Click the Pin icon to promote the tooltip to a tool window.
Note: These key bindings cannot currently be changed. Sorry, we'll fix this soon!
Tooltip before expansion:
Tooltip after expansion:
Solution Navigator Settings
We realize that some of the features offered in this extension might not be to everyone’s liking, so we’ve enabled the ability to disable the following features:
- Interactive tooltip
- Re-rooting – This affects both the tool window and tooltip
- Member expansion – This affects both the tool window and tooltip
- Showing the Open, Unsaved and Edited filters
The settings for the Solution Navigator are located under Tools->Options->Productivity Power Tools->Solution Navigator.
If you wish to turn off the Solution Navigator entirely, go to Tools->Options->Productivity Power Tools->All Extensions.
Solution Navigator Known Issues
Note that in the current release, the Solution Navigator tool window doesn’t support everything that the Solution Explorer supports. The following list details the not yet implemented features and known issues for the current release:
Known Issues:
- Opening a solution causes Solution Explorer to automatically receive focus
- Adding a new folder to the solution switches focus from Solution Navigator to Solution Explorer
- Adding a project to the solution switches focus from Solution Navigator to Solution Explorer
Not Yet Implemented Features:
Solution Navigator does not support:
- Drag and Drop of files onto, out of, or inside the Solution Navigator
- Selecting multiple files at once
- Persistence of the expansion state of items within the Solution Navigator across sessions of Visual Studio.
- Updating item icons to reflect an item being ‘cut’
- Updating folder icons to reflect an ‘open’ state
This is our first public release of the extension, and we are very happy with the feedback so far. We’ve already been working hard on additional features and will hopefully get some into the next release of the Productivity Power Tools!
Thanks for your time, and we hope you enjoy this extension.
Solution Navigator Team
- Adam Nathan – Principal Developer
- Matthew Johnson – Shell Developer
- Adrian Collier – Program Management/ User Experience
- Amy Hartwig – QA
- Sam Zaiss – User Experience
Project Contributors:
- Andrew Neil – Source Control Testing
- Srivatsn Narayanan – VB Language Model integration
Great job guys.. I've retired Solution Explorer on my machine… now the Solution Navigator holds that place!
Arun
Looks really handy. Good work!
Another issue is that it hijacks the AltGr+2 keyboard combination (AltGr apparently translates to Alt+Ctrl in VS2010), which is the "@" sign on Nordic (Danish/Norwegian) keyboards. So that's pretty impairing. You can disable the Solution Navigator and use the other goodness in the productivity package though.
Looks quite interesting.
There is one thing I noticed that makes life a bit difficult. I'm using VisualSVN, which overrides the source-control status of the Solution Explorer with its own (SVN-like) icons. Now the Solution Navigator is back to displaying the original icons provided by Visual Studio. Is there a chance to get the VisualSVN icons in Solution Navigator, too?
Thanks, Michael
Very nice add-in! What is about full support for native C++ projects?
Excellent. In my VB forms I only seem to see the controls listed, not any of my methods, etc. Is that correct?
'ALT GR'+2 brings up the same window as CTRL+2. Please make this optional, because in Denmark the @ sign is on 'ALT GR'+2. So in effect I can't have Solution Navigator enabled and make @ signs at the same time.
This is awesome!
I started using it yesterday on a large solution and it made navigating through code so much faster. Having the features of Class View, Object Browser, Call Hierarchy, Navigate To, and Find Symbol References as a tool-tip is incredibly powerful.
The tooltip pops up very quickly. Slowing it down a bit would be nice – or make it configurable. A two or three second delay would make it less distracting when moving the cursor around the code editor.
Also, I had trouble renaming a solution yesterday (context menu->rename). I couldn't do it in Solution Navigator or Solution Explorer. I had to F2 in Solution Explorer to get it to work.
I only used Solution Navigator for a day but I already love it.
Pretty amazing; one wonders why this is not in the package and enabled by default.
I started using the Solution Navigator on my large C++ project as soon as it was released, and I have to agree with Mr. Mahendrakar: the Solution Navigator is my new default tool window where I used to keep the solution explorer.
Even without the additional language features available in C#/VB, the ability to scope the files displayed is a major time-saver when working on a large project.
I would welcome the opportunity to totally replace the Solution Explorer were it not for the few little things missing from the Solution Navigator, especially the ability to drag-and-drop files and filters around in the project hierarchy.
I'll be looking forward to what the VS team comes up with in the future for this great tool.
I'm really impressed with this. The ability to filter the tree alone is a real time saver.
Re @Feedback's comments on the tooltips in the editor popping up too quickly, I have to agree. It's slightly annoying and distracting. I'd prefer a brief delay if possible.
Hope the Solution Navigator is going to be the default in VS2012 🙂
I have to choose between changing my keyboard layout from the control panel and forgoing derived types and so on… or being unable to use | and @. Currently I got those chars from the charmap and used SuperCopyPaste to keep them available… Then I used those to write some code snippets… so, thank you, I finally learned to write code snippets. If anybody wants the snippets, write me at my name at gmail, but I would suggest you learn to write them yourself :P.
I would also like to have a hotkey for the search in the navigator so that I can quickly go and search for references in the navigator. Just a thought.
Keep up the good work!
Hi All,
Thanks for the amazing feedback, we are very pleased with the reception of our first release and are already working hard on updates for the next release.
@Johannes Hansen, @Lars Holm Jensen and @Thearot
Thanks for your feedback on the keyboard binding issues; we are looking into releasing a fix for this issue shortly.
@Michael,
Unfortunately in cases where a provider overrides the default icons in the Solution Explorer, an update would be required from the source control provider rather than the Solution Navigator Team.
@Samsa
We would love to have full feature parity across all our languages; however this is something we are going to have to investigate for future releases.
@Brendan0
Please can you send a screen shot of this issue along with a description to snfeedback [at] microsoft.com so we can investigate further.
@Feedback
Thanks for the suggestion of adding a delay to the tooltip; we will investigate this option for a future release.
Renaming a solution is a known issue we will be looking to address in a future release.
@Keith
Thanks for the suggestion; we will look into adding a shortcut which takes users directly to the search box.
Thanks for your time,
Adrian
Like other developers using a nordic keyboard layout already mentioned: not being able to type '@' (AltGr-2 on my keyboard) in code windows is a little problematic 🙂
One issue I haven't seen mentioned yet is the coloring.
I have attempted to use light text on a dark background for most parts of the VS UI. Many parts of VS, like the Solution Explorer, don't seem to be skinnable via the theme editor nor via the fonts and colors dialog, but I was pleasantly surprised that Solution Navigator did pick up the dark background I set somewhere. (As it has been set in many places in both the fonts and colors dialog and the theme editor, I don't know which setting exactly it was picked up from.) The problem: Solution Navigator doesn't pick up the matching foreground color, so it shows as black text on a dark gray background… And I can't find the proper setting where it does pick the foreground color from, if configurable at all.
As an aside, changing the visual studio colors to light text on dark background, in an Expression-like way seems to be rather popular, maybe MS could make that a little easier and offer an out-of-the-box light-on-dark theme?
I'm also loving the Solution Navigator but I have been having a few issues related to selection synchronization between the Solution Explorer and the new Solution Navigator. When I select an item in the Navigator, it does not change the selection in the Solution Explorer as well. This leads to problems with tools that operate on a file by getting the currently selected item in the Solution Explorer – TestDriven.Net is a good example. Also, the Navigator does not respect the setting of "Track active item in Solution Explorer", and always tracks the active item.
Finally, if you could suppress the events that cause the Solution Explorer to be made visible and instead make them operate on the Navigator, I could be rid of Solution Explorer for good. Thanks for a great tool!
–Brandon
These improvements are great but I'd really like to see existing functionality fixed first. Any chance of providing a hotfix for this issue (all x3 appear to be duplicates?)
Someone mentioned its been voted the 4th highest VS bug on connect.
connect.microsoft.com/…/vsipissue-context-menus-open-in-scrolling-mode-while-there-is-place-to-show-the-whole-menu
connect.microsoft.com/…/inconsistent-behaviour-positioning-sizing-context-menus
connect.microsoft.com/…/please-avoid-scrolling-context-menus-when-vertical-space-is-available
I really like this extension , especially the Solution Navigator. However, I have some suggestions / requests:
– Would it be possible to implement a filter for edited files that are under version control? (Files that have changes based on the info of the VCS…)
– A shortcut for jumping to the solution search box (the textbox in the solution navigator) would be amazing.
– Possible bug: Sometimes, several projects are displayed in bold (marked as the active project). VS 2010 is set to mark the currently selected project as active; it seems that this confuses the navigator…
– Possible bug: On another computer the "Open" filter in the solution filter does nothing; the whole solution is shown. On my computer it works perfectly fine though…
This extension is one of the major "new features" of VS 2010 for me ;).
There's a bug in Solution Navigator that manifests itself when unloading projects. If I unload a project in a multi-project solution from Solution Explorer, all is fine. When I do the same thing in Solution Navigator, VS says "An unhandled exception has occurred" and then unloads the project anyway. Something is broken here…
Solution Navigator is very cool! It's my favorite tool.
The file filter has All, Open, Unsaved and Edited buttons. I want to filter by 'Checked out'. For those of us using version control, I think it would be very useful.
A couple issues with latest build:
– Running unit tests from right-click context menu (through CodeRush test runner OR TestDriven.net) fails on the project level. Works fine within Solution Explorer (and actually will work if you run once from Solution Explorer, then try running from Solution Navigator. It appears to carry over the context from the previous run in Solution Explorer if that helps to diagnose).
– Right click -> New Folder… doesn't automatically begin the rename process for the folder by placing the caret in the folder name with keyboard focus — this is a bit annoying. See existing Solution Explorer for the appropriate functionality.
Thanks for an otherwise solid add-on. I'll report any more issues I run into.
I love it – but I too have a context-menu issue. I am not sure if this is covered by any of the other reports..
I use the keyboard exclusively. Using the context-menu key will still make the menu appear by the mouse cursor (this behavior used to apply to setup projects in the solution explorer too. Not sure if it does in 2010).
Cheers!
What are the plans around the SCC glyphs?
I see all the SCC operations in the context menus, but none of the status glyphs to show what is checked out, modified, locked, etc.
@Bert – What SCC provider are you using? For the power tools release we only tested TFS and VSS for full fidelity UI integration. I think, for example, Ankh SVN uses a different way of inserting the SCC channel icon glyphs. We are taking a look into this.
It would be nice to have an "Expand All" button. Or can this already be accomplished somehow?
Need search results tickmarks on the scrollbars!
@Bert – We have added Ankh support. This will likely be in a future release of the power tool pack. Keep a look out.
Nice tool, but it is slower than Solution Explorer. With a medium-sized class (~500 lines) it hangs when displaying members…
Perfect tool guys, especially the Solution Navigator. I really love your interactive tooltips feature, but there is one thing that makes it really useless for me now (not your fault though). I am using ReSharper and it overrides the Quick view tooltip that Visual Studio offers in its own way. This is cool, but it also appears right in front of your interactive tooltip, making it really hard to use or even access. The Ctrl+1 shortcut does not work either because the damned ReSharper makes its own override with another trigger, and since you do not allow remapping this shortcut it cannot be fixed (and the bloody ReSharper does not allow disabling its Quick view without deactivating it completely).
Can you please make the keyboard shortcut visible in Visual Studio options ASAP?
Thanks a lot.
In the latest released version of the extension, you should already be able to remap Ctrl+1 and Ctrl+2: the commands are named Edit.ShowSolutionNavigatorPopupForSelection (Ctrl+1) and Edit.ShowSolutionNavigatorPopupForFile (Ctrl+2).
Also seeing SCC overlay problems in VisualHG. All files have the plus symbol overlaid, regardless of status. There are no overlays in non-SCC'd files
See visualhg.codeplex.com/…/View.aspx. I am unsure whether it natively uses Tortoise overlays in VS or not – you'd have to ask the dev or check the source code out.
I can't even see the cotton-pickin solution navigator, much less use it. What do I need to enable viewing this "enhanced" solution explorer. I am so frustrated at all the "hype" I see around this tool, yet no documentation anywhere for simpletons like myself
There should be a command to show the Solution Navigator on the View menu.
There is a command in the View menu and you could use Ctrl+W,F
VisualHG Source-Control Status Icons
The VisualHG Overlay Icon support I used is very standard.
For this I implemented the interfaces IVsSccGlyphs and IVsSccManager2.
namespace Microsoft.VisualStudio.Shell.Interop
{
    // Allows full customization of source control glyphs. (4 custom glyphs only)
    public interface IVsSccGlyphs
    {
        // Called by the IDE to get a custom glyph image list for source control status.
        int GetCustomGlyphList(uint BaseIndex, out uint pdwImageListHandle);
    }

    // Base source control functionality interface
    public interface IVsSccManager2
    {
        ...

        // Provides source control icons for the specified files and returns SCC status of files
        int GetSccGlyph(int cFiles, string[] rgpszFullPaths, VsStateIcon[] rgsiGlyphs, ...
    }
}
I hope that helps
Bernd
Very useful and powerful tool.
Just one small improvement would make it even better: add a Source Control Status filter, to show the files that I have checked out.
Great extension, but having the tooltips replaced should be an option. The fact that I can't see the arguments for a function by hovering is a huge productivity minus for me.
Great tool!
One thing of note is that you cannot double-click solution folders to expand/collapse them. (you can still single-click the tiny arrow at least) I imagine this is unintended as all other folders can be expanded/collapsed by double-clicking them.
When I try to see classes tree in …aspx.cs file I see text "Unable to get code items"
What's wrong?
Is there any possibility to make your Solution Navigator work with extensions that are hooked into Solution Explorer? Some VS extensions add their own commands to Solution Explorer which do not work in Solution Navigator.
This is really useful.
I would like to add a vote for a new link next to 'Unsaved' and 'Edited' for 'Checked Out'. That would be really useful.
Otherwise great utility…
I am also getting the "Unable to get code items" so I cannot drill down any lower than the module's file.
Do you have any support / troubleshooting links?
Great tool, thank you.
We work with big solutions containing several dozen projects, most of them unloaded though. Therefore I suggest an option to show loaded projects only.
Sadly you can't add any files to the solution with D&D in Navigator. It works only with Solution Explorer…
Great work! Adding a 'Checked Out' option next to 'All Open Unsaved Edited' for filtering would definitely be a great addition.
This is a terrific tool. No more Solution Explorer for me! I will add one thing to the wish list: that it would remember my position in the file hierarchy from session to session. Most of my work is done in a solution with 11 projects. On startup they are all expanded in Solution Navigator. I "Collapse All" and then must manually navigate down to where I was working last.
PLEASE, allow us to change the freaking background color. It has been requested million times, it is painful to use dark theme with GLOWING BRIGHT solution explorer/navigator.
SUCH A PRIMITIVE THING and it takes MS YEARS to accomplish.
WHAT IS THE POINT OF CHANGING ALL THE OTHER COLORS WHEN SOME ARE HARDCODED?
I have only 2 eyes.
Very useful add-in, and the treeview features are excellent. How can I get the source code of this add-in?
@pradeep: The code for this particular extension is not public. Although the Solution Navigator is a free extension for VS 2010, we don't currently plan to release its source code.
Thanks for this handy addon.
I find that just having an easy way to create a vertical list of all open files is useful.
I was going to request a feature that would give us more control over exactly what files appear inside scopes, but after a little fiddling around, I've realized that I can get just about all the functionality I want by combining the Solution Navigator with some creative misuses of top-level solution folders.
Feature Request: would it be possible to add "status tagging" to items in the Solution Explorer/Navigator?
I don't mean source control status, but actual "what stage of maturity is this item at" status.
Here's what I mean: Say in my project I have screen1.xaml (and screen1.xaml.cs). I want to be able to right-click on an item and select Set Item Status… This would then pop out another quick menu with options like: Finalized, Completed, Incomplete, Debugging, etc. Then each item would show, in addition to its normal icon, the status icon.
You could even allow the status menu to be customized, associating various text entries with a given icon type. I spend a lot of time opening files to try to remember if I'm done. I think that the //todo: is underwhelming and messy. I just want to be able to see at a glance which items require attention.
Thanks Very Much!
PERFECT!!! Is the word that came to my mind, after checking it out. I have no words to express the limit to which I have liked the post.
I get the class decomposition in C# code, but not in VB (in the same solution)?
Great extension, it really is.
Is there any way to get the Solution Navigator to expand to the currently open file? In ReSharper I can do Alt+Shift+L. That'd be awesome.
Nice work.
Like GatoCat, I have to click the solution navigator's collapse all button every time I reopen a solution. It would be great if it remembered its state. Regardless, a great tool. Thanks!
I'd be excited too if I could find how to show it
@Rob You need to install the "Productivity Power Tools" extension to see it (VS2010).
In VS 2013, Edit.ShowSolutionNavigatorPopupForSelection is missing.
A Dart WeChat AirKiss library to configure IoT devices.
To use this plugin, add airkiss as a dependency in your pubspec.yaml file:

dependencies:
  airkiss: ^1.0.0
import 'package:airkiss/airkiss.dart';

void test(String ssid, String pwd) async {
  print('config ssid:$ssid, pwd:$pwd');
  AirkissConfig ac = AirkissConfig();
  var res = await ac.config(ssid, pwd);
  if (res != null) {
    print('result: $res');
  } else {
    print('config failed!!! please ensure phone/pc connected to '
        'Wi-Fi[$ssid] with 2.4GHz Channel(NOT 5GHz Channel)');
  }
}

void main() {
  test("SSID", "PASSWORD");
}
We analyzed this package on Apr 4, 2019, and provided a score, details, and suggestions below.

Detected platforms: Flutter, other

Primary library: package:airkiss/airkiss.dart with components: io.
Document public APIs. (-1 points)

43 out of 43 API elements have no dartdoc comment. Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API.
Fix lib/airkiss.dart. (-1.99 points)

Analysis of lib/airkiss.dart reported 4 hints:

- line 96 col 62: Use = to separate a named parameter from its default value.
- line 104 col 18: Use = to separate a named parameter from its default value.
- line 155 col 5: Future results in async function bodies must be awaited or marked unawaited using package:pedantic.
- line 164 col 15: Use isNotEmpty instead of length.
Maintain an example: example/airkiss.dart. Packages with multiple examples should provide example/README.md.
For more information see the pub package layout conventions.
When trying to import the Tiger 2011 data into Nominatim, I get the following parse error:
./utils/imports.php --parse-tiger-2011 data/tiger/ftp2.census.gov/geo/tiger/TIGER2011/EDGES
Processing 01001...
File "/usr/src/nominatim/utils/tigerAddressImport.py", line 3340
raise KeyError, 'missing FIPS code', fips
^
SyntaxError: invalid syntax
Failed parse (/usr/src/nominatim/data/tiger/ftp2.census.gov/geo/tiger/TIGER2011/EDGES/tl_2011_01001_edges.zip)
I am following the directions from the answer to this question:
Does this still work?
asked
11 Apr '13, 00:24
montanalow
40●2●2●5
accept rate:
0%
It still works. Make sure that you use python 2.x and not python 3.
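For context, the failing line uses Python 2's three-expression raise statement, which Python 3 rejects at compile time (hence a SyntaxError before any of the script runs). A small illustration of the two spellings, with a made-up FIPS value:

```python
# Python 2 only -- the form tigerAddressImport.py uses:
#     raise KeyError, 'missing FIPS code', fips
# Python 3 removed the comma syntax; the closest equivalent is:

fips = '99999'  # made-up FIPS code, for illustration only

try:
    raise KeyError('missing FIPS code: ' + fips)
except KeyError as exc:
    print(exc)  # the importer would abort here for an unknown FIPS code
```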
answered
11 Apr '13, 14:07
lonvia
5.7k●2●53●81
accept rate:
41%
Thanks, I was using Python 3, and that was the problem. Works perfectly with Python 2.
I just finished the import, and it looks like FIPS codes > 60000 are not handled by tigerAddressImport.py, so they generate the same error, even though there is Tiger EDGE data.
It looks like FIPS 60010 is American Somoa (not one of the 50 states), so I'm guessing this is intended behavior.
Indeed, that is a known limitation. It should only be a matter of adding the FIPS codes in tigerAddressImport.py to make it work.
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
This is the support site for OpenStreetMap.
Question tags:
nominatim ×640
import ×185
tiger ×30
question asked: 11 Apr '13, 00:24
question was seen: 3,939 times
last updated: 12 Apr '13, 07:04
Issue importing tiger data into nominatim
Tiger House Numbers
Uploading a small bit of new TIGER data (and 3 other unrelated questions)
[closed] OSM hang when adding More OSM files to Nominatim
Nominatim update sql error in placex_update
[closed] Tiger 2011 edges data imports problem
Import more osm files in to Nominatim
How to check Nominatim planet import execution is running in background or terminated?
Choose regions during the import of a country into nominatim database.
[closed] Whats the right way to add a second OSM file to your Nominatim instance?
First time here? Check out the FAQ!
https://help.openstreetmap.org/questions/21380/error-importing-tiger-2011-data-into-nominatim?sort=newest
|
#include <fei_Dof.hpp>
Dof - mesh-degree-of-freedom.
A mesh-dof is the triple (rank, id, field).
Thus if the user chooses to give nodes a rank of 0 and the temperature field a label of 4, then the temperature at node 97 can be represented as the dof (0, 97, 4).
Notes:
1. The Dof class is templated on the two integer types LocalOrdinal and GlobalOrdinal. Ranks and Fields have type LocalOrdinal, while ids have type GlobalOrdinal. The distinction is somewhat arbitrary, but the assumption is that there may be billions (or more?) ids, but probably far fewer distinct ranks or fields. So in extreme cases a user may wish to use a different type (larger) for ids than for the rank or field.
Definition at line 40 of file fei_Dof.hpp.
constructor
Definition at line 43 of file fei_Dof.hpp.
destructor
Definition at line 47 of file fei_Dof.hpp.
http://trilinos.sandia.gov/packages/docs/dev/packages/fei/doc/html/classfei_1_1Dof.html
|
Microsoft SQL Server Profiler is a client tool for developers that comes with SQL Server (the Express edition does not bundle it). We use this tool to trace through queries. I mostly use it for peer testing, as well as when a customer reports bugs. Well, if you are a 'standard' SQL coder, you won't need to use these tools. Maybe you have already seen my article on SQL good practices.
This article is an introduction to the implementation of a 'profiler like thing' with .NET. I would like to call it 'SQL Tracer' since it is out of the scope of this page to develop all the functionalities of a SQL Profiler. I have chosen C# for the demonstration.
I was very much satisfied with the SQL Profiler which is available with Microsoft SQL Server 2000. But the one that comes with SQL Server 2005 seems a little bit slow. It inspired me to develop a fast query tracer tool.
You must have Microsoft SQL Profiler components installed in your machine. You may be asking why we need this new tool if we already have the MS SQL Profiler. Note that the one I explain here is not an alternative for MS SQL Profiler. This is a handy tool with very basic functionalities. As a result, this tool gives you fast results. More than that, this article is for educational purposes.
First... Add Reference to Microsoft.SqlServer.ConnectionInfo.
From this, we will get two namespaces:
using Microsoft.SqlServer.Management.Trace;
using Microsoft.SqlServer.Management.Common;
For this example, I would recommend a ListView control since it gives the look and feel of the real Microsoft SQL Server Profiler.
ListView
The TraceServer class acts as a representation of a new SQL Server Trace. More information is available here.
TraceServer
You need to create a .tdf file, which is a template file. You can either create a new .tdf by using Save as option from the SQL Server Profiler itself, or you can use the default ones available on your installation folder, which is usually - E:\Program Files\Microsoft SQL Server\90\Tools\Profiler\Templates\Microsoft SQL Server\80\*.tdf.
With this class, we will initialize the server host, username, etc. It usually looks like this:
ConnectionInfoBase conninfo = new SqlConnectionInfo();
((SqlConnectionInfo)conninfo).ServerName = "MyComputerNameOrIP";
((SqlConnectionInfo)conninfo).UserName = "PraveenIsMyUsername";
((SqlConnectionInfo)conninfo).Password = "MyPassword";
((SqlConnectionInfo)conninfo).UseIntegratedSecurity = false;
More information about this class is available here.
This method is used to initialize an object for reading from the trace log file or server. E.g.:
TraceServer trace = new TraceServer();
trace.InitializeAsReader(conninfo, "mytracetemplate.tdf");
InitializeAsReader causes the initialization and starting of the tracing operation.
InitializeAsReader
trace.Read() is used to read trace information from SQL Server. You can put a loop to fetch all the trace information. Like this:
trace.Read()
while (trace.Read()) {
//Statements;
}
Inside this loop, you can display status information in a ListView. The trace object contains all the needed properties.
trace
trace["EventClass"] contains information like ExistingConnection, Audit Login, Audit Logout, RPC:Completed, Trace Start etc. If you are a SQL Profiler user, then you are already familiar with these messages.
trace["EventClass"]
trace["TextData"] is the element which contains the queries which are being executed.
trace["TextData"]
Like this, we have trace["ApplicationName"], trace["Duration"] etc. also available. These elements are defined in your .tdf file. So investigate it. trace.FieldCount will give you the number of fields available. Since this article is for intermediate users and you know about fetching the values from collections etc., I will not mention it here.
trace["ApplicationName"]
trace["Duration"]
trace.FieldCount
Since trace.Read() blocks and will not give you control to do your other tasks, there is a chance your application will appear to have died. So, use a Thread.
Thread
You can control the tracing by applying the trace.start(), trace.pause(), and trace.stop() methods.
trace.start()
trace.pause()
trace.stop()
Do not forget to use trace.close() after use. Standard practice anyway.
trace.close()
Unfortunately, I do not have a stable sample application to provide. I will upload one once I have it.
http://www.codeproject.com/Articles/20173/MS-SQL-Server-Profiler-with-NET?msg=2875232
|
PROBLEM LINK:
Practice
Source
DIFFICULTY:
Medium
PREREQUISITES:
Factorial, Trailing Zeros, Binary Search
PROBLEM:
Your task is to find the minimal natural number N such that N! contains exactly Q zeroes on the trail in decimal notation. As you know, N! = 1*2*...*N. For example, 5! = 120, and 120 contains one zero on the trail.
EXPLANATION:
Source
To find the number of trailing zeroes in a factorial, we need the powers of two and five in the prime factorization of the factorial. For example, 5! = 5 x 4 x 3 x 2 x 1 = 2^3 x 3^1 x 5^1. The number of trailing zeroes is the minimum of the number of 2s and the number of 5s, which is always the number of 5s, since there are always extra 2s.
Now, how do we find the power of 5 (or of any prime number)? We follow this procedure. Let's find the power of 5 in 200!. 200! = 1 x 2 x 3 x ... x 199 x 200. As we only need the power of 5, we neglect other terms and observe the numbers divisible by 5:
200! = 5 x 10 x 15 x … x 195 x 200 x others.
Taking out the powers of 5 from all the numbers,
200! = 5^(200/5) x (1 x 2 x 3 x … x 39 x 40) x others
200! = 5^(40) x (1 x 2 x 3 x … x 39 x 40) x others.
Repeating the process in the inner bracket,
200! = 5^(40) x ( 5 x 10 x 15 x … x 35 x 40) x others
200! = 5^(40) x 5^(40/5) x (1 x 2 x 3 x … x 7 x 8) x others
200! = 5^(40) x 5^(8) x (5) x others.
The power of 5 in 200! = 40 + 8 + 1 = 49
If we observe, we see that at each iteration we are getting the number of powers of 5. Like, power of 5 in n! = floor(n/5) + floor(n/(5^2)) + floor(n/(5^3)) + … till we get a power of 5 greater than n after which all terms equals to 0. We can generalize this in following way:
Power of k in n! = floor(n/k) + floor(n/(k^2)) + floor(n/(k^3)) + …
We can get the number of terms by:
Terms t = log_k (n) = log(n)/log(k)
In the same way, we can find power of any prime number or any other number (with prime factorization and then following the process) using this method.
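The summation above translates directly into code. Here is a minimal Python sketch of my own (the setter's full solution appears later) that counts the power of 5 in n!, which equals the number of trailing zeroes:

```python
def trailing_zeros(n):
    """Power of 5 in n!, i.e. the number of trailing zeroes of n!."""
    count = 0
    p = 5
    while p <= n:
        count += n // p  # multiples of p up to n contribute one factor of 5 each
        p *= 5
    return count

# Matches the worked example: 200! has 40 + 8 + 1 = 49 trailing zeroes.
```

Each loop iteration adds one term of floor(n/5) + floor(n/25) + floor(n/125) + ..., stopping once 5^k exceeds n.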
In this question, we need to find the smallest number whose factorial has the given number of zeroes Q.
The easy and dumb way to do this is to run a loop until we find the solution; if the value of Q is small, we will find the value in a reasonable time. An optimized way is to perform a binary search over the range. The lower limit is 5, since 5! has 1 trailing zero. The upper limit, which covers Q up to 10^8 and was found by hit and trial, is 400000015. So let l = 5 and r = 400000015.
We will make a function which returns the number of zeroes of the passed argument using above method, as we will frequently need the function.
Binary search works in case of sorted numbers and as in our case as the number increases the number of trailing zeroes in the factorial increases, we can apply binary search.
Now the way binary search works is:
- Set answer = right limit.
- Compare against the mid value of both limits, mid = (l + r) / 2. If the required value Q equals the number of zeros for mid, update the answer to the minimum of mid and the previous answer, and set mid as the new right limit, as mid might not be the smallest possible value whose factorial has Q zeros.
- If the number of zeros for mid is greater than required, mid becomes the new right limit, as the required value will not be in the second half.
- Similarly, if the number of zeros for mid is less than required, mid becomes the new left limit.
And if the left limit will become greater than the right limit, we will break and return the answer as the smallest number whose factorial has the required zeroes.
SOLUTIONS:
Setter’s Solution
def zeros(n):
    p = 5
    c = 0
    while True:
        x = n // p
        p *= 5
        c += x
        if not x:
            return c

m = int(input())
inf = 10**20
ans = inf
hi = inf
lo = 1
while lo <= hi:
    mid = (hi + lo) // 2
    n = zeros(mid)
    if n > m:
        hi = mid - 1
    elif n < m:
        lo = mid + 1
    else:
        hi = mid - 1
        ans = min(ans, mid)
print(ans if ans != inf else "No solution")
https://discuss.codechef.com/t/hw3d-editorial/67358
|
Tutorial 43: Send a Text Message!
DIFFICULTY
EASY
LINUX UNDERSTANDING
LITTLE
PYTHON PROGRAMMING
LITTLE
ABOUT
0MINUTES
You can copy / paste the code below if you’re having issues with typos or want a shortcut. However I recommend that you follow along in the tutorial to understand what is going on!
from twilio.rest import Client

account_sid = "XXXXXXX"  # Put your Twilio account SID here
auth_token = "XXXXXXX"   # Put your auth token here

client = Client(account_sid, auth_token)

message = client.api.account.messages.create(
    to="+#####",      # Put your cellphone number here
    from_="+######",  # Put your Twilio number here
    body="This is my message that I am sending to my phone!")
A Twilio account is needed for this to work. See below for a link:
http://thezanshow.com/electronics-tutorials/raspberry-pi/tutorial-43
|
In this tutorial we'll build a 3D business card. We won't use Away3D, Alternativa, Yogurt3D, Sandy3D, Papervision3D or any other 3D engine built for Flash. We'll use only the 3D features of Flash Player 10.
Final Result Preview
Let's take a look at the final result we will be working towards:
Click it!
Step 1: Create new FLA
Create a new ActionScript 3.0 file.
Step 2: Edit Profile
Before editing, let's save our document as "BusinessCard3D.fla" into any folder you want.
After having saved the document, write "BusinessCard3D" into the Class field to set a document class. If you don't know what Document Class is or how it is used, you can learn from this Quick Tip.
Step 3: Creating Document Class
We entered the name of the document class, but we haven't yet created it.
In the Profile section click the little pen icon near the "BusinessCard3D".
In this tutorial we'll use Flash Professional. Click the OK button and you'll see a new ActionScript file in front of you. You'll see a simple class:
package { import flash.display.MovieClip; public class BusinessCard3D extends MovieClip { public function BusinessCard3D() { // constructor code } } }
Remove the "// constructor code" line and save this as "BusinessCard3D.as" into the same folder which contains "BusinessCard3D.fla".
Step 4: Import Card Textures
You'll need two visuals to build a business card. One of them is for the front and the other one is for the back side of the card. I designed some minimal cards for this tutorial:
Basically, copy these images and paste them into your Flash document. They will be added to the scene automatically. Now remove them and open the Library panel:
We need to set up their linkage names so that we can use them in runtime. This means, we'll export them for actionscript. There is a very quick way to do this.
By default, the Linkage section of the images is empty. Click the blank area of the Linkage section of CardBack.png:
Having done that, enter "CardBack":
Do the same for the CardFront.png image. After you've entered the linkage names, the Library panel should look like this:
Yes. Now the fun part. We are now ready to begin coding :)
Step 5: Setting Imports
First we import some other classes that we'll use in the following steps:
import flash.display.Bitmap; import flash.display.DisplayObject; import flash.display.Sprite; import flash.events.Event; import flash.geom.Point;
Insert these lines between the
package { and
public class BusinessCard3D extends Sprite lines.
Step 6: Setup Variables
After importing classes, let's set up our variables. Insert these lines just above the
public function BusinessCard3D() line:
private var businessCard:Sprite private var frontHolder:Sprite private var backHolder:Sprite private var frontTexture:Bitmap private var backTexture:Bitmap private var p1:Point private var p2:Point private var p3:Point private var p1_:Point = new Point(0,0) private var p2_:Point = new Point(100,0) private var p3_:Point = new Point(0,100)
As you can guess
businessCard holds our other two cards. It's the main holder.
frontHolder holds the
frontTexture, backHolder holds the
backTexture.
frontTexture and
backTexture are the Bitmaps from library.
We could use just one main holder and add images into it. But the problem with that is it may confuse beginners. Since we'll rotate the back side of the card by 180 degrees and since the registration point of the Bitmap class is top-left, we would also have to change its
x property. By adding another holder we only have to change its rotation.
Step 7: Setting Images/Textures
After setting up the variables, let's write our first function. It basically gets images from the library as BitmapData objects, creating
frontTexture and
backTexture Bitmaps from them.
public function getTextures() { frontTexture = new Bitmap(new CardFront(0,0)) backTexture = new Bitmap(new CardBack(0,0)) }
First we get the CardFront image by writing:
new CardFront(0,0)
This is the only way to get a BitmapData of any image from Library. We can't use only BitmapData. If we had been using a 3D engine then it would probably be enough, but with native Flash 3D we need to use a Bitmap object, so we'll create a Bitmap object from the BitmapData.
new CardFront(0,0) returns us a BitmapData and that BitmapData is used in Bitmap to create
frontTexture. We do the same for
backTexture and our textures are ready.
Step 8: Adding Textures into Holders
Now we'll write our second function. This function builds our holders and adds our textures into holders:
public function addIntoHolders() { businessCard = new Sprite() frontHolder = new Sprite() backHolder = new Sprite() frontHolder.addChild(frontTexture) backHolder.addChild(backTexture) businessCard.addChild(frontHolder) businessCard.addChild(backHolder) addChild(businessCard) }
As you see, we first create new Sprites which are the perfect choice for holder purposes. Then we add our textures into texture holders. Then we add these texture holders into the main holder.
Lastly we add the main holder to the scene, onto the stage. We'll use the main holder as a business card.
Step 9: Initializing the Cards
We need to change the rotation and x,y coordinates of the cards.
public function initCards() { backHolder.rotationY = 180 frontTexture.x = -frontTexture.width/2 frontTexture.y = -frontTexture.height/2 backTexture.x = -backTexture.width/2 backTexture.y = -backTexture.height/2 }
First we rotate the back side of the card by 180 degrees. Then we set the positions of both cards. This is a simple trick: we effectively set the registration point of the holder of the cards to its center. This is because of the perspective of the default 3D scene in our document.
Step 10: Front Facing
This is arguably the most difficult step in our tutorial. We are building a business card, and when we look at the front side of the card, the back side of the card shouldn't be seen. How can we do this? We could maybe write some
if conditions by using rotations of the main holder... but there is an easier way.
Imagine that we have two red points and one blue point on a surface. Now imagine that we have an infinite line which passes through the two red points. This line divides the surfaces into two sides. Check out the image below:
As you see, blue point has two chances. It can be on the side of green or on the side of yellow. If we can figure out where the blue point is then we can solve our problem.
Step 11: How is This Related to 3D?
Now let's talk about the 3D.
In this image we have a 3D plane. Imagine that it's rotated in the Y-axis a bit (so the edge on your left is further away from you than the edge on your right). Let's put red points and a blue point on the corners of the plane.
Do you see the infinite line? Check out image below:
It's actually the same as the first image. If the blue point now goes to the other side of the line, it means that the other side of the plane is being seen. Therefore, using the positions of just three points, we can determine which face of the plane is towards us.
This method is used in Away3D, PaperVision, Yogurt3D, Alternativa and other engines and basically improves the performance.
For this method we will use a function:
public function isFrontFacing(displayObject:DisplayObject):Boolean { p1 = displayObject.localToGlobal(p1_); p2 = displayObject.localToGlobal(p2_); p3 = displayObject.localToGlobal(p3_); return Boolean((p2.x-p1.x)*(p3.y-p1.y) - (p2.y-p1.y)*(p3.x-p1.x) > 0); }
This function creates three points in the card (our plane). And then it returns us the location of the third point (blue one in illustrations). If it returns
true then it means we're seeing the front side of the card (plane). If not, it means that we're seeing the back side of the card (plane).
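The sign test described above can be sketched independently of ActionScript. Here is a small Python version of the same cross-product check; the point names mirror the tutorial's p1, p2, p3, and the example coordinates are made up for illustration:

```python
def is_front_facing(p1, p2, p3):
    # 2D cross product of the edge vectors (p2 - p1) and (p3 - p1).
    # Its sign tells us which side of the line through p1 and p2
    # the point p3 lies on — i.e. which face of the plane we see.
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) \
          - (p2[1] - p1[1]) * (p3[0] - p1[0])
    return cross > 0

# Unrotated card: the three projected corners keep their winding.
print(is_front_facing((0, 0), (100, 0), (0, 100)))   # front side
# Mirrored (card rotated past 90 degrees): the winding flips.
print(is_front_facing((0, 0), (100, 0), (0, -100)))  # back side
```

When the plane rotates past edge-on, the projected point p3 crosses the line through p1 and p2, the cross product changes sign, and we know the back face is now toward the viewer.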
Step 12: Rendering
Now, we finally write our last function. This function basically rotates our business card and checks the visibility of the faces.
The first two lines set the position of the main holder to the center. This is because of the perspective of the default 3D scene in our document. Then we rotate our main holder by using mouse coordinates, adding a traditional and simple smoothing effect. The last two lines make each card visible when we should see it.
Step 13: Calling all Functions
We are ready. Let's call our functions in order:
public function BusinessCard3D() { getTextures() addIntoHolders() initCards() addEventListener(Event.ENTER_FRAME, render) }
We also add an
ENTER_FRAME event to trigger the render function every frame.
Step 14: Test Movie
Finally we are ready to test our movie.
Move the mouse and you will see that the business card will be rotated too. We tested our card. Now let's go a bit further.
Most of you, and I personally, think that mouse rotations are cooler, but from my experience with 3D they can confuse users. We'll therefore convert this to a simpler animation: when we click the card, it'll rotate itself.
Step 15: Get Tweener
First, for our animation we'll use Tweener. It's really simple.
So, download the latest Tweener version from code.google.com. I'm using version 1.33.74, ActionScript 3 (Flash 9+).
Extract the ZIP file and move the "caurina" folder to the folder that contains our Flash document.
Step 16: Import Tweener and MouseEvent
Our first lines were about importing classes. We'll import Tweener and also MouseEvent.
import flash.display.Bitmap; import flash.display.DisplayObject; import flash.display.Sprite; import flash.events.Event; import flash.geom.Point; import caurina.transitions.Tweener import flash.events.MouseEvent
Step 17: New Rendering
In our
render() function, the card's rotation was directly related to the mouse coordinates. But we don't want this now. We'll basically click the card and it'll turn.
So, remove the highlighted lines:
Step 18: Adding new Variable
We'll add a new variable,
frontFace. Its type is Boolean. When the user clicks the card we'll rotate our card to 180 or to 0 depending on the value of
frontFace.
private var frontFace:Boolean = true
Step 19: MouseEvent Handler
Now we are writing our final function. When we click the card this function will be triggered.
public function onDown(e:MouseEvent) { if(frontFace) { Tweener.addTween(businessCard,{rotationY:180,time:1}) frontFace = false }else{ Tweener.addTween(businessCard,{rotationY:0,time:1}) frontFace = true } }
We first look at the
frontFace variable. If it is
true then it means that we are currently looking at the front side of the card. If it's
false then it means we are looking at the back side of the card.
When we are looking at the front side of the card, we basically say "rotate it to 180 degrees", so we can see the back side of the card. We use the same idea when we are looking at the back side of the card (in which case, we rotate it to 0 degrees).
Step 20: Add MouseEvent
Our final line is to add a
MouseEvent listener to trigger the
onDown() function we just wrote. We are adding it to our business card. You could even add it to the stage.
businessCard.addEventListener(MouseEvent.MOUSE_DOWN, onDown)
Step 21: Test the Movie
Test your movie and click the card. Maybe you can write a funky "Click to rotate my business card ;)" sentence on your visuals :)
Conclusion
In this lesson we learnt how to build a 3D two-sided plane by using the native Flash Player 10 3D API and ActionScript 3.0. First we controlled it by using mouse coordinates. Then we switched to click-based control so as not to confuse users.
As you see the capabilities of the 3D feature in Flash Player are not perfect, but we can always formulate solutions and can build simple 3D dynamic animations without any third-party engine.
https://code.tutsplus.com/tutorials/building-a-3d-business-card-with-pure-as3--active-6641
|
30. CO2 Gas Sensor(EF05030)
30.1. Introduction
The higher the CO2 concentration is, the lower the output voltage will be. The CO2 probe is industrial grade, highly sensitive to CO2, and resistant to interference from alcohol and CO.
30.2. Characteristic
Designed in RJ11 connections, easy to plug.
30.3. Specification
30.4. Outlook
30.5. Quick to Start
30.5.1. Materials Required and Diagram
Connect the CO2 sensor to J1 port and the OLED to the IIC port in the Nezha expansion board as the picture shows.
30.6. MakeCode Programming
30.6.2. Step 2
30.6.3. Code as below:
30.6.4. Link
30.6.5. Result
The detected value of the CO2 gas sensor displays on the OLED screen.
30.7. Python Programming
30.7.1. Step 1
Download the package and unzip it: PlanetX_MicroPython
Go to Python editor
We need to add enum.py and CO2.py for programming. Click “Load/Save” and then click “Show Files (1)” to see more choices, click “Add file” to add enum.py and CO2.py from the unzipped package of PlanetX_MicroPython.
30.7.2. Step 2
30.7.3. Reference
from microbit import *
from enum import *
from co2 import *

co2 = CO2(J1)
while True:
    display.scroll(co2.get_co2())
30.7.4. Result
The detected value of the CO2 gas sensor displays on the micro:bit.
https://www.elecfreaks.com/learn-en/microbitplanetX/Plant_X_EF05030.html
|
When will javax.naming.CommunicationException be thrown in RMI?
The Java API states that this exception will be thrown when the client is unable to communicate with the naming service. The reason for this exception may be server-side or client-side failures.
In this post we will see how a server-side failure causes this exception in RMI.
I guess you know about RMI (Remote Method Invocation), through which we are able to access the methods of a remote object. To execute an RMI application we need these components:
1.Server
2.Client
3.Stub
4.Skeleton
5.RMI registry
Here we are going to focus on the stub,
What is meant by a stub?
It is a client-side object used to communicate with the server-side object (skeleton). The client needs to get a stub object.
From where does the client get its stub object?
The client gets the stub object from the RMI registry, where the server has already registered a remote object under a name (RMI URL). The client uses that name (RMI URL) to get the stub object.
This problem arises when the server is unable to register (or bind) an object to a name (RMI URL).
See the following example to understand about this exception.
" The reason for this problem is that,when registering a remote object in the RMI registry we need to specify the "path" of the remote interface(in our example - NameCollections) implemented by the object and other components needed by that object, if any of the components needed is not available in the given path then this exception will be thrown."
Program:
Prog1:
import java.rmi.*;

public interface NameCollections extends Remote {
    public String getName(int index) throws RemoteException;
}
Prog2:
import java.rmi.*;
import java.util.*;
import java.rmi.server.*;

class NameStorage extends UnicastRemoteObject implements NameCollections {
    private Map<Integer, String> name;

    public NameStorage() throws RemoteException {
        name = new HashMap<Integer, String>();
        name.put(1, "Ganesh");
        name.put(2, "jazz");
    }

    public String getName(int index) throws RemoteException {
        String str = name.get(new Integer(index));
        return str;
    }
}
Prog 3:); System.out.println("Clients can invoke the methods...."); } }
Output:
Screen1:
Screen2:
Here the NameServer program tries to register a remote object with the RMI URL, and when registering the object, as I said before, we need to specify the path for the required files. Here I tried to have the files served on port 8080, but see what happens.
Have a look at screen 1, where I executed a command like the following:
java -cp . NanoHTTPD 8080, which says that the NanoHTTPD server has to serve the files on port 8080. But if you look at screen 1, it says that the server is serving files from port 80.
I didn't notice that, and I had given the path as codebase= to look for files; that's why I got this error.
So I had to change the command, with codebase=, to resolve the above problem. I don't know whether the problem is in the NanoHTTPD server or in my laptop.
"But an important thing to be noted here is the path that you provide should exactly point to the location where the necessary files are residing in the server."
In some cases, if you leave off the trailing slash in the command, as given in the line below, then you would get this error:
codebase =
http://craftingjava.blogspot.com/2012/10/javaxnamingcommunicationexception-in-rmi.html
|
Name | Synopsis | Description | Usage | Attributes | See Also
#include <stdio.h>
#include <stdio_ext.h>

size_t __fbufsize(FILE *stream);
int __flbf(FILE *stream);
size_t __fpending(FILE *stream);
void __fpurge(FILE *stream);
int __freadable(FILE *stream);
int __freading(FILE *stream);
int __fsetlocking(FILE *stream, int type);
int __fwritable(FILE *stream);
int __fwriting(FILE *stream);
void _flushlbf(void);
These functions provide portable access to the members of the stdio(3C) FILE structure.
The __fbufsize() function returns in bytes the size of the buffer currently in use by the given stream.
The __flbf() function returns non-zero if the stream is line-buffered.
The __fpending function returns in bytes the amount of output pending on a stream.
The __fpurge() function discards any pending buffered I/O on the stream.
The __freadable() function returns non-zero if it is possible to read from a stream.
The __freading() function returns non-zero if the file is open readonly, or if the last operation on the stream was a read operation such as fread(3C) or fgetc(3C). Otherwise it returns 0.
The __fsetlocking() function allows the type of locking performed by stdio on a given stream to be controlled by the programmer.
If type is FSETLOCKING_INTERNAL, stdio performs implicit locking around every operation on the given stream. This is the default system behavior on that stream.
If type is FSETLOCKING_BYCALLER, stdio assumes that the caller is responsible for maintaining the integrity of the stream in the face of access by multiple threads. If there is only one thread accessing the stream, nothing further needs to be done. If multiple threads are accessing the stream, then the caller can use the flockfile(), funlockfile(), and ftrylockfile() functions described on the flockfile(3C) manual page to provide the appropriate locking. In both this and the case where type is FSETLOCKING_INTERNAL, __fsetlocking() returns the previous state of the stream.
If type is FSETLOCKING_QUERY, __fsetlocking() returns the current state of the stream without changing it.
The __fwritable() function returns non-zero if it is possible to write on a stream.
The __fwriting() function returns non-zero if the file is open write-only or append-only, or if the last operation on the stream was a write operation such as fwrite(3C) or fputc(3C). Otherwise it returns 0.
The _flushlbf() function flushes all line-buffered files. It is used when reading from a line-buffered file.
Although the contents of the stdio FILE structure have always been private to the stdio implementation, some applications have needed to obtain information about a stdio stream that was not accessible through a supported interface. These applications have resorted to accessing fields of the FILE structure directly, rendering them possibly non-portable to new implementations of stdio, or more likely, preventing enhancements to stdio that would cause those applications to break.
In the 64-bit environment, the FILE structure is opaque. The functions described here are provided as a means of obtaining the information that up to now has been retrieved directly from the FILE structure. Because they are based on the needs of existing applications (such as mh and emacs), they may be extended as other programs are ported. Although they may still be non-portable to other operating systems, they will be compatible from each Solaris release to the next. Interfaces that are more portable are under development.
See attributes(5) for descriptions of the following attributes:
fgetc(3C), flockfile(3C), fputc(3C), fread(3C), fwrite(3C), stdio(3C), attributes(5)
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i0991h/index.html
The other day, I emailed Darrell about SkeletoNUnit.
> Know anything about this?

> Looks interesting, I remember you were talking about NUnit addins, and
> got me thinking of an automatic unit-test generator. Should be fairly
> easy to make one, so I thought for sure there's one out there.
>
> I know it's backwards to generate unit tests, but I can see the
> advantages, especially with legacy code.
Anyhow, he didn't know of anything else off-hand, and SkeletoNUnit seems, like its name suggests, to be a dead project... So, I'm asking if anyone knows of anything else out there that will do this? Basically, I'm looking for something that will automate the generation of NUnit unit tests.
I've tried .TEST from Parasoft (which does generate NUnit tests), but I couldn't get it to work fully. It would choke on big assemblies, and I couldn't figure out a way to generate tests for one class at a time. I also can't justify the price tag, when VS2005 may do much of this stuff and make .TEST obsolete.
-Brendan
san,
I just posted a lame manual on the project page ().
Hope this answers your question.
Thanks for your interest :)
I am having problems with the add in for vs2005. It is giving me a null reference exception when I select generate tests. Any idea what might be going on?
Hi,
I am getting an exception while Visual Studio 2003 and 2005 is loading the Addin. Can you please help me out.
Thanks in advance
I just saw another NUnit Test Generator. The price is really reasonable. It works too.
For NUnitGenAddIn, one cause of the null reference is a class without a namespace; add one, or:
replace any occurance of:
string targetNamespace = (targetClass.Parent as CodeNamespace).FullName;
with:
string targetNamespace;
if ((targetClass.Parent as CodeNamespace) == null) targetNamespace = "";
else { targetNamespace = (targetClass.Parent as CodeNamespace).FullName; }
And my short review cause i'm retyping this: I think i'd prefer generation of just empty stubs to all the stuff he tries to generate. The add-on works well tho after that namespace problem.
Have you looked at Doubler by Jay Flowers?
jayflowers.com/.../index
http://codebetter.com/blogs/brendan.tompkins/archive/2004/09/17/25952.aspx
Tom Anderson wrote:
> On Mon, 19 Sep 2005, Brett Hoerner wrote:
>
>> Wouldn't the standard idiom be to actually put the code under the
>> if-name, and not make a whole new main() function?
>
> Yes.
>
> The nice thing about the main() function, though, is that you can do the
> most basic argument parsing in the if-block. Like so:
>
> def powers(n):
>     m = 1
>     while True:
>         yield m
>         m = m * n
>
> def main(n, limit):
>     for power in powers(n):
>         if (power > limit): break
>         print power
>
> import sys
>
> if (__name__ == "__main__"):
>     main(int(sys.argv[1]), int(sys.argv[2]))
>
> That serves as a sort of documentation on the arguments to the script, and
> also makes it easier for other scripts to reuse the main logic of the
> program, since they don't have to package parameters up as a string array.
> It is more verbose, though.
>
>> I'm not sure I see the reason behind main(), couldn't that also
>> interfere with other modules since main() seems like it might be common,
>> not sure how it would work as I'm pretty new to Python myself.
>
> The two mains would be in different namespaces, so they wouldn't conflict.
>
>> from script import *
>
> Don't do that. 'from script import x' is, IMNERHO, bad practice, and 'from
> script import *' is exceptionally bad practice. I know a lot of people do,
> but that doesn't make it right; namespaces are there for a reason.
>
> tom

I haven't a clue what all this means, but it looks important! lol Thanks for the heads-up, will take note of what you've said. Incidentally, at work my main programming platform is Visual Studio .NET, and I never import the children of namespaces, so hopefully this practice of mine will carry over to Python.
https://mail.python.org/pipermail/python-list/2005-September/319524.html
Learning Clojure/Data Types
There are a few notable things to say about all of Clojure's types:
- Clojure is implemented in Java: the compiler is written in Java, and Clojure code itself is run as Java VM code. Consequently, data types in Clojure are Java data types: all values in Clojure are regular Java reference objects, i.e. instances of Java classes.
- Most Clojure types are immutable, i.e. once created, they never change.
- Clojure favors equality comparisons over identity comparisons: instead of, say, comparing two lists to see if they are the very same object in memory, the Clojure way is to compare their actual values, i.e. their content. Most languages (including Java) don't do things this way because inspecting the values of deeply structured objects is costly, but Clojure makes it cheap: when created, a Clojure object keeps around a hash of itself, and it's this hash which is compared in an equality comparison rather than actually inspecting the objects; this hash suffices as long as the compared structures are entirely immutable. (Watch out for cases of mutable Java objects stored in immutable Clojure collection objects. If the mutable object changes, this won't be reflected in the collection's hash.)
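The hash-caching idea can be sketched in a few lines (shown here in Python as an illustration of the technique; this is not Clojure's actual implementation):

```python
class HashedVec:
    """An immutable sequence that computes its hash once, at construction.

    Equality can then reject unequal values by comparing cached hashes
    before ever walking the contents -- the same trick that makes deep
    equality cheap for immutable structures.
    """

    def __init__(self, *items):
        self._items = tuple(items)       # immutable storage
        self._hash = hash(self._items)   # hashed once, up front

    def __hash__(self):
        return self._hash

    def __eq__(self, other):
        if not isinstance(other, HashedVec):
            return NotImplemented
        if self._hash != other._hash:    # cheap rejection path
            return False
        return self._items == other._items

a = HashedVec(1, 2, 3)
b = HashedVec(1, 2, 3)
c = HashedVec(1, 2, 4)
assert a == b and hash(a) == hash(b)
assert a != c
```

Note that equal hashes do not guarantee equal contents (hash collisions exist), which is why the equal-hash case still falls through to a full comparison; the cached hash only buys a fast exit on the unequal case.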
Numbers
Java includes wrapper reference types for its primitive number types, e.g. java.lang.Integer "boxes" (wraps) the primitive int type. Because every Clojure function is a JVM method expecting Object arguments, Java primitives are usually boxed in Clojure functions: when Clojure calls a Java method, a returned primitive is automatically wrapped, and any arguments to a Java method are automatically unwrapped as necessary. (However, type hinting allows non-parameter locals in Clojure functions to be unboxed primitives, which can be useful when you're trying to optimize a loop.)
Clojure uses the classes clojure.lang.BigInt and java.math.BigDecimal for arbitrary-precision integer and decimal values, respectively. Special versions of the Clojure arithmetic operations (+', -', *', inc', and dec') intelligently return these kinds of values as necessary to ensure the results are always fully precise.
Some rational values simply can't be represented in floating-point, so Clojure adds a Ratio type. A Ratio value is a ratio between two integers. Written as a literal, a Ratio is two integers with a slash between them, e.g. 23/55 (twenty-three fifty-fifths).
Clojure arithmetic operations intelligently return integers or ratios as necessary, e.g. 7/3 plus 2/3 returns 3, and 11 divided by 5 returns 11/5. As long as your calculations involve only integers and ratios, the results will be mathematically fully accurate, but as soon as a floating-point or BigDecimal value enters the mix, you'll get floating-point or BigDecimal results, which may lead to results which are not mathematically fully accurate, e.g. 1 divided by 7 returns 1/7, but 1 divided by 7.0 returns 0.14285714285714285.
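Python's fractions.Fraction behaves much like Clojure's Ratio, so the examples above can be replayed in Python (an analogy only; the names and APIs here are Python's, not Clojure's):

```python
from fractions import Fraction

# 7/3 + 2/3 collapses to the integer 3, just like the Clojure example
assert Fraction(7, 3) + Fraction(2, 3) == 3

# 11 divided by 5 stays an exact ratio
assert Fraction(11) / Fraction(5) == Fraction(11, 5)

# ...but as soon as a float enters the mix, exactness is lost
assert Fraction(1, 7) != 1 / 7.0
assert 1 / 7.0 == 0.14285714285714285
```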
Strings
A string in Clojure is simply an instance of java.lang.String. As in Java, string literals are written in double quotes, but unlike in Java, string literals may span onto multiple lines.
Characters
A java.lang.Character literal is written as \ followed by the character:

\e \t \tab \newline \space

As you can see, whitespace characters are written as words after the \.
Booleans
The literals true and false represent the values java.lang.Boolean.TRUE and java.lang.Boolean.FALSE, respectively.
Nil
In most Lisp dialects, there is a value semi-equivalent to Java's null called nil. In Clojure, nil is simply Java's null value, end of story.

In Java, only true and false are legitimate values for condition expressions, but in Clojure, condition expressions treat nil as having the truth value false. So whereas !null ("not null") is invalid Java, the Clojure equivalent (not nil) returns true.
Functions
A function in Clojure is a type of object, so a Clojure function can not only be invoked but can also be passed as an argument. As Clojure is a dynamic language, Clojure function parameters are not typed---arguments of any type can be passed to any Clojure function---but a Clojure function has a set arity, so an exception is thrown if you pass a function the wrong number of arguments. However, the last parameter of a function can be declared to accept any extra arguments as a list (like "variable arguments" in Java) such that the function accepts n or more arguments.
Vars
Var is one of the few mutable types in Clojure. A Var is basically a single storage cell for holding another object---a collection of one, basically.
A single Var can actually constitute multiple references: a root binding (a binding visible to all threads) and any number of thread-local bindings (bindings each visible to a single thread). When the value of a Var is accessed, the binding accessed may depend upon the thread doing the access: the value of the Var's thread-local binding is returned if the Var has a thread-local binding for that thread; otherwise, the value of the Var's root binding value (if any) is returned.
Typically, all global functions and variables in Clojure are each stored in the root binding of a Var. Because a Var is mutable, we can change the Var's value to monkey-patch the system as it runs. For instance, we can substitute a buggy function with a fixed replacement. This works because, in Clojure, a compiled function is bound to the Vars holding the functions it invokes, not the functions themselves, nor the names used to specify the Vars; since a Var is mutable, the function(s) called by a function can change without redefining the function.
Local parameters and variables in Clojure are immutable: they are bound at the start of their lifetime and then never bound again. Sometimes, however, we really do want mutable locals, and Vars with thread-local bindings can serve this purpose.
Thread-local bindings also allow us to monkey-patch just for the span of a local context. Say we have a function cat which calls a function stored in a Var; if a function goat is root-bound to the Var, then cat will normally call goat; however, if we call cat in a scope where we have thread-locally bound a function moose to that Var, then cat will invoke moose instead of goat.
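The root-binding/thread-local-binding behavior can be mimicked with a rough sketch, using Python's threading.local to stand in for per-thread bindings (an analogy only; Clojure's real Var machinery does much more):

```python
import threading

class Var:
    """A root binding visible to all threads, plus optional
    per-thread bindings that shadow it (a sketch of a Clojure Var)."""

    def __init__(self, root):
        self._root = root
        self._local = threading.local()

    def deref(self):
        # A thread-local binding wins; otherwise fall back to the root
        return getattr(self._local, "value", self._root)

    def bind_locally(self, value):
        self._local.value = value

def goat():
    return "goat"

def moose():
    return "moose"

v = Var(goat)

def cat():
    # cat calls whatever function the Var currently resolves to
    return v.deref()()

assert cat() == "goat"          # root binding, main thread

results = []
def worker():
    v.bind_locally(moose)       # shadow the root in this thread only
    results.append(cat())

t = threading.Thread(target=worker)
t.start()
t.join()

assert results == ["moose"]     # the worker's cat() called moose
assert cat() == "goat"          # the main thread still sees the root binding
```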
Namespaces
You should organize your code into namespaces. A Clojure namespace is an object representing a mapping of symbol values to Var and/or java.lang.Class objects.
- A Var can either be referred or interned in a namespace: the difference is that a Var can only be interned in one namespace but can be referred in any number of namespaces. In other words, the namespace in which a Var is interned is the namespace to which it "really" belongs.
- A Class can only be referred, not interned, in namespaces. When a namespace is created, it automatically refers the classes of java.lang.
In a sense, namespaces themselves live in one global namespace: a namespace name is unique to one single namespace, e.g. you never have more than one namespace named foo.
When Clojure starts, it creates a namespace called clojure in which it maps the symbol *ns* to a Var which is used to hold "the current namespace". Then, Clojure runs a script called core.clj, which interns in clojure many standard functions, including functions for manipulating the current namespace, such as:
- in-ns sets the current namespace to a particular namespace (manipulating clojure/*ns* directly is frowned upon).
- import refers Class objects into the current namespace.
- refer refers the interned Vars of another namespace into the current namespace.
Symbols
In Lisp, what are normally called identifiers in other languages are called symbols. A symbol, however, is not just a name seen by the compiler but rather a kind of value, a string-like kind of value---i.e. a sequence of characters. As a symbol is a value, a symbol can be stored in a collection, passed as an argument to a function, etc., just like any other object.
A symbol can only contain alphanumeric characters and * + ! / . : - _ ? but must not begin with a numeral or colon:

rubber-baby-buggy-bumper! ; valid
j3_!:7 ; valid
HELICOPTER ; valid
+fiduciary+ ; valid

3moose ; invalid
rubber baby buggy bumper ; invalid
Symbols containing a / are namespace-qualified:

foo/bar ; a symbol qualified with the namespace name "foo"

Symbols containing . are treated specially at evaluation time, as we'll see.
Collections
A key feature of Clojure is that its standard collection types---lists and hashmaps, mainly---are all persistent. A persistent collection is an object which is immutable but from which producing a new collection based on the existing collection is cheap because the existing data needn't be copied. For instance, the operation which appends an element to a persistent list does not actually modify the list but rather returns a new list which is the same as the original but with an extra element; this new list is created cheaply because it mostly requires just creating a new node and linking it to the already existing list nodes, which are now shared between the two lists. Both the original collection and the new collection have the same performance characteristics.
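Structural sharing is easy to see with a hand-rolled cons list (a Python sketch of the idea only; Clojure's real collections use more sophisticated tree structures, and adding to a list happens at the front):

```python
# Each cell is a (head, tail) pair; None is the empty list.
def cons(head, tail):
    return (head, tail)

def to_list(cell):
    """Walk the cells and collect the elements into a Python list."""
    out = []
    while cell is not None:
        out.append(cell[0])
        cell = cell[1]
    return out

base = cons(3, cons(2, cons(1, None)))   # the list (3 2 1)
extended = cons(4, base)                 # "adds" 4 without copying base

assert to_list(base) == [3, 2, 1]        # the original is unchanged
assert to_list(extended) == [4, 3, 2, 1]
assert extended[1] is base               # the tail is shared, not copied
```

Producing `extended` allocated exactly one new cell; everything else is the same memory as `base`, which is safe precisely because the cells are never mutated.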
- Lists
The Clojure persistent list type is a singly-linked list and is expressed as a literal in parentheses:
(53 "moo" asdf) ; a list of three elements: a number, a string, and a symbol
- Vectors
Singly-linked lists are often inappropriate, performance-wise, so Clojure includes a type it calls vector. A Clojure vector is an ordered, one-dimensional sequence like a list, but a vector is implemented as a hashmap-like structure such that index look up times are O(log32 n) instead of O(n). A vector is expressed as a literal in square brackets:
[53 "moo" asdf] ; a vector of three elements: a number, a string, and a symbol
- Hashmaps
A hashmap is expressed as a literal in curly braces such that each group of two arguments is a key-value pair:
{35 "moo" "quack" 21} ; a hashmap with the key-value pairs 35 -> "moo" and "quack" -> 21
- Sequence
A sequence is not an actual collection type but an interface to which list, vector, hashmap, and all other Clojure collection types conform. A sequence supports the operations first and rest: first retrieves the first item of the collection while rest retrieves a sequence of all the remaining items. As we'll see, sequences support a large number of operations built upon these two fundamental operations.
(When a sequence is produced from a map, first means retrieving a single pair of the map as a vector; the pair returned is effectively random as far as the programmer is concerned. The rest of a map-based sequence is the sequence of all remaining pairs as vectors.)
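The first/rest protocol can be sketched over any Python iterable (an analogy to the sequence interface, not Clojure's implementation):

```python
def first(coll):
    """Return the first item of coll, or None if it's empty."""
    return next(iter(coll), None)

def rest(coll):
    """Return an iterator over everything after the first item."""
    it = iter(coll)
    next(it, None)   # drop the first element
    return it

assert first([10, 20, 30]) == 10
assert list(rest([10, 20, 30])) == [20, 30]

# Over a map, "first" yields a single key/value pair
assert first({"a": 1}.items()) == ("a", 1)
```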
Keywords
A keyword is a variant of a symbol, distinguished by being preceded by a colon:
:rubber-baby-buggy-bumper! ; valid
:j3_!:7 ; valid
:HELICOPTER ; valid
:+fiduciary+ ; valid
Keywords exist simply because, as you'll see, it's useful to have names in code which are symbol-like but not actually symbols. Keywords are by default not namespace-qualified. However, in some cases it may be useful to generate a keyword that is namespace-qualified so as to avoid name clashes with other code. For that purpose, one can either qualify the namespace explicitly or type a symbol preceded by two colons:
::gina ; equivalent to :adam/gina (assuming this is in the namespace "adam")

;; in the REPL, after (in-ns 'adam) and (clojure.core/refer 'clojure.core)
adam=> (namespace :gina) ; no namespace
nil
adam=> (namespace ::gina)
"adam"
adam=> (namespace :adam/gina)
"adam"
Note: There is a caveat with programmatically generated keywords regarding namespaces. One can generate a keyword that looks like it is part of a namespace, but (namespace) will return nil:

; use (namespace) to see what the namespace of the returned keywords is
user=> (keyword "test") ; a keyword with no namespace
:test
user=> (keyword "user" "test") ; a keyword in the user namespace
:user/test
user=> (keyword "user/test") ; a keyword that has no namespace but looks like it does!
:user/test
Metadata
Metadata is data describing other data. A Clojure object can have a single other object (any object implementing IPersistentMap) attached to it as metadata, e.g. a Vector can have a hashmap attached to it as metadata.
Attaching metadata to an object does not modify the object but rather creates a new object---effectively, an object with different metadata is a different object. However, equality comparisons ignore metadata.
https://en.m.wikibooks.org/wiki/Learning_Clojure/Data_Types
Hi

On Fri, Apr 06, 2007 at 01:43:48AM +0200, Víctor Paesa wrote:
> Hi,
>
> The attached patch provides a list of the valid AMR bitrates as
> part of the error message if a wrong one is used.
>
> Regards,
> Víctor

> Index: ffmpeg/libavcodec/amr.c
> ===================================================================
> --- ffmpeg/libavcodec/amr.c (revision 8629)
> +++ ffmpeg/libavcodec/amr.c (working copy)
> @@ -86,6 +86,9 @@
>  #include "amr_float/interf_enc.h"
>  #endif
>
> +#define NB_BITRATE_UNSUPPORTED "bitrate not supported: use one of 4.75k, 5.15k, 5.9k, 6.7k, 7.4k, 7.95k, 10.2k or 12.2k\n"
> +#define WB_BITRATE_UNSUPPORTED "bitrate not supported: use one of 6.6k, 8.85k, 12.65k, 14.25k, 15.85k, 18.25k, 19.85k, 23.05k, or 23.85k\n"

why not static const: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-April/021329.html
I did research and tried the code below. It almost works but I only want the consecutive numbers in the result. That would be [100,101,102]. I do not want the [75], [78], [109] in the result.
from operator import itemgetter
from itertools import groupby
data = [75, 78, 100, 101, 102, 109]
for k, g in groupby(enumerate(data), lambda (i,x):i-x):
print map(itemgetter(1), g)
g in your example is the iterable of elements (e.g., [75], or [100, 101, 102]). If you only want consecutive numbers, it sounds like you're looking to print all gs that contain more than one element. (Note: g is actually an iterator, but we can cheaply convert it to a list with list() for a small number of elements. We need to save the contents anyway, because an element can't be read twice from an iterator.)

Try wrapping the print map(itemgetter(1), g) in an if statement, such as:
x = list(g)
if len(x) > 1:
    print map(itemgetter(1), x)
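For reference, the whole approach can be written as one self-contained function in modern Python 3 (the thread's code above is Python 2; the function name here is my own):

```python
from itertools import groupby
from operator import itemgetter

def consecutive_runs(data, min_len=2):
    """Group consecutive integers; keep only runs of at least min_len."""
    runs = []
    # i - x is constant within a run of consecutive values
    for _, group in groupby(enumerate(data), key=lambda pair: pair[0] - pair[1]):
        run = list(map(itemgetter(1), group))
        if len(run) >= min_len:
            runs.append(run)
    return runs

data = [75, 78, 100, 101, 102, 109]
print(consecutive_runs(data))   # → [[100, 101, 102]]
```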
https://codedump.io/share/SWWGVsX5g86T/1/how-to-choose-only-consecutive-numbers-from-a-list-using-python
Hi Ethan,
just a note: I got the same WebAppContext error when building from HEAD the
day before yesterday... even two times "maven clean" didn't solve it.
Kind regards,
Daniel
On Tue, Mar 22, 2011 at 9:02 AM, Ethan Jewett <esjewett@gmail.com> wrote:
> Hi Eric,
>
> My current theory is that my home network connection wasn't allowing
> me to access a repository for some reason. I am planning to try it out
> again tonight on a different network, so we'll see what happens.
>
> Thanks for taking the time to think about it :-)
>
> Ethan
>
> On Tue, Mar 22, 2011 at 4:58 AM, ejc <eric.j.christeson@gmail.com> wrote:
> > I'm not sure what kind of prescription I was on, but I was unable to
> > reproduce the error I thought I was getting. Probably the reason I
> > never reported it. I have no idea what is causing your error. I even
> > tried starting with a clean checkout of esme and a clean .m2 and .ivy2
> > -- no issues.
> > Sorry for the red herring.
> >
> > Thanks,
> > Eric
> >
> > On Mon, Mar 21, 2011 at 2:37 AM, Ethan Jewett <esjewett@gmail.com>
> wrote:
> >> Hi Eric,
> >>
> >> Interesting. I had wondered if the issue was Jetty 7 vs. 6 but for
> >> some reason I decided that I was getting Jetty 6. I'll have to check
> >> this out again.
> >>
> >> If you get a chance to test, I also get the WebAppContext error in SBT
> >> when trying to run tests or "sbt jetty-run".
> >>
> >> Cheers,
> >> Ethan
> >>
> >> On Monday, March 21, 2011, ejc <eric.j.christeson@gmail.com> wrote:
> >>> I've run into problems before with sbt. The Jetty version
> >>> specification caused it to pull 7.x where they changed namespaces and
> >>> may have moved some classes. Haven't tried Maven so I can't speak of
> >>> your specific errors. It was a difficult problem because it would
> >>> clear up if I nuked .m2/repos and built something else that would pull
> >>> Jetty 6. I keep forgetting to bring this up and submit my patch.
> >>>
> >>> Thanks,
> >>> Eric
> >>>
> >>> On Sunday, March 20, 2011, Ethan Jewett <esjewett@gmail.com> wrote:
> >>>> For me, "mvn clean test" did the trick. Hopefully it's just me...
> >>>>
> >>>> Ethan
> >>>>
> >>>> On Sun, Mar 20, 2011 at 6:38 PM, Richard Hirsch <
> hirsch.dick@gmail.com> wrote:
> >>>>> What commands exactly as you using? I can try it tomorrow.
> >>>>>
> >>>>> D.
> >>>>> On Sun, Mar 20, 2011 at 12:00 PM, Ethan Jewett <esjewett@gmail.com>
> wrote:
> >>>>>> A little more investigation and I see this during the Maven build,
> >>>>>> which may be relevant:
> >>>>>>
> >>>>>> [INFO] >>> maven-jetty-plugin:6.1.16:run (default-cli) @ esme-server >>>
> >>>>>> [WARNING] Could not transfer metadata
> >>>>>> org.mortbay.jetty:jetty/maven-metadata.xml from/to Apache Repo
> >>>>>> (): No
> >>>>>> connector available to access repository Apache Repo
> >>>>>> () of type
> >>>>>> legacy using the available factories WagonRepositoryConnectorFactory
> >>>>>>
> >>>>>>
> >>>>>> Could be a problem with my internet connection, actually.
> >>>>>>
> >>>>>> Ethan
> >>>>>>
> >>>>>> On Sun, Mar 20, 2011 at 11:57 AM, Ethan Jewett <esjewett@gmail.com>
> wrote:
> >>>>>>> Hi all,
> >>>>>>>
> >>>>>>> I'm getting an error while compiling tests under Maven & SBT:
> >>>>>>>
> >>>>>>>
> /Users/esjewett/svn_repos/esme/trunk/server/src/test/scala/org/apache/esme/lib/MsgParseTest.scala:33:
> >>>>>>> WebAppContext is not a member of org.mortbay.jetty.webapp
> >>>>>>> [error] import org.mortbay.jetty.webapp.WebAppContext
> >>>>>>>
> >>>>>>> This occurs in a few places and as far as I can tell the WebAppContext
> >>>>>>> import is in place.
> >>>>>>>
> >>>>>>> Anyone else having the issue?
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>> Ethan
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>
> >
>
--
Gruss / Kind regards,
*Daniel Koller *
Jahnstrasse 20 * 80469 München
+49.89 68008308 (landline) * +49.163.6191979 (mobile)
http://mail-archives.apache.org/mod_mbox/esme-dev/201103.mbox/%3CAANLkTikYHeXL4_wN-z6KtKMeS+-AqmQhP3heae4sSEnH@mail.gmail.com%3E
package com.as400samplecode;

import java.io.File;
import java.io.IOException;

public class CreateFile {

    public static void main(String[] args) {
        try {
            File myFile = new File("data/newFile.txt");
            if (myFile.createNewFile()) {
                System.out.println("File is created!");
            } else {
                System.out.println("File already exists.");
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
How to create a file in Java if one doesn't exist
The File.createNewFile() method creates a new, empty file named by this abstract pathname if and only if a file with this name does not yet exist. The method returns true if the file is created successfully and false if the file already exists or the operation failed.
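For comparison, a rough Python equivalent of this create-if-absent behavior (a sketch unrelated to the Java sample's codebase) can use O_CREAT | O_EXCL to get the same atomic exclusivity guarantee:

```python
import os
import tempfile

def create_new_file(path):
    """Create the file only if it doesn't exist yet. Return True on
    creation, False if it was already there (like File.createNewFile)."""
    try:
        # O_CREAT | O_EXCL makes the check-and-create atomic
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.close(fd)
    return True

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "newFile.txt")
    assert create_new_file(target) is True    # created
    assert create_new_file(target) is False   # second call: already exists
```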
https://www.mysamplecode.com/2012/05/java-createfile-if-not-exists.html
Forum:73h Quote Crew r00lz!
From Uncyclopedia, the content-free encyclopedia
A Slight Misunderstanding
It looks as though the Making up Quotes articles have again survived the fiery trials of VFD. In the light of this, perhaps we should realize a few things:
- People are going to keep making articles like this, and other people are going to keep voting in favor of them.
- A lot of people on Uncyclopedia dislike the Making up Quotes pages a lot, and the arguing to protect the pages seems to happen more often than the actual making up of the fake quotes.
- Wikipedia didn't want their articles cluttered with quotations, so they started a new project. WikiQuote.
So, this line of thought makes me come up with the following ideas:
- Perhaps the "Making up Quotes" craze needs to get it's own wikicity somehow.
- Or, perhaps we should form some kind of group having to do with protecting good quotes pages and getting rid of crappy ones.
Any thoughts? --Nerd42eMailTalkUnMetaWPediah2g2 19:56, 1 March 2006 (UTC)
- How about a namespace? "Quotes:" sounds good to me. --[[User:Nintendorulez|Nintendorulez | talk]] 20:04, 1 March 2006 (UTC)
here's the main page: --Da, Y?YY?YYY?:-:CUN3 NotM BLK |_LG8+::: 07:49, 11 February 2006 (UTC)
Post-Exodus
Main page now at Unquotable:Main Page. All "Making up quotes" pages have been moved except for Making up Oscar Wilde quotes... Proposed namespace policy and style manual should be posted within 48 hours, but if you want to get started on new pages, I guess now's as good a time as any, folks! c • > • cunwapquc? 07:16, 2 March 2006 (UTC)
- I like this idea. It should make everyone happy - those against it, and those for it. I see Making up (?:Matrix|Winston Churchill|Feb 3|Sun-Tzu|.*) Quotes pages still in the default namespace. Are they going to be moved too? --User:Keithhackworth/sig2
- Those are all just redirects now (except for Oscar's, since I haven't moved him yet). I'd suggest keeping them in place, perhaps indefinitely, if only to avoid breaking external links and what-not. By the way, I put three proposed logos up for consideration at Uncyclopedia:Logos... Y'all feel free to add/suggest others, of course. (I'm not so sure any of them are all that great!) c • > • cunwapquc? 14:05, 2 March 2006 (UTC)
hey awesome. I'm happy --Nerd42eMailTalkUnMetaWPediah2g2 17:18, 2 March 2006 (UTC)
Not So Happy After All, It Would Seem
- wait a second, whered the Matrix quotes go? --Nerd42eMailTalkUnMetaWPediah2g2 17:23, 2 March 2006 (UTC)
- Red link: Making up Matrix quotes
- Blue link: Making up Matrix Quotes
Whoever deleted the page should be banned. That's what I get for leaving redirect pages lying around, right? --Nerd42eMailTalkUnMetaWPediah2g2 17:25, 2 March 2006 (UTC)
agreed where is it people.... --Da, Y?YY?YYY?:-:CUN3 NotM BLK |_LG8+::: 07:49, 11 February 2006 (UTC)
- Uncyclopedia:Pages_for_deletion/archive28#Making_up_Matrix_quotes --—rc (t) 04:58, 3 March 2006 (UTC)
- Oh crap, I thought it had made it. More people voted afterwards I guess. this site sux --Nerd42eMailTalkUnMetaWPediah2g2 15:46, 3 March 2006 (UTC)
- Look, I know this is difficult to face, but if Making up Matrix Quotes was that good, people would have voted to keep it. This wasn't one admin deciding to kill a page, it was a majority of users voting to axe it. --—rc (t) 17:15, 3 March 2006 (UTC)
- I realize the workings of the democratic process. However, I think:
- A large number of VFD articles go by which people who would like to protect pages don't hear about in time. (This happens all the time on a lot of wikis.)
- Deletion of content should be a last resort, not the primary method of quality control on any type of user-generated content site.
- Some people vote on pages not based on the quality of the humor, but because they don't like the subject. A comprehensive SPOV encyclopedia should cover all possible subjects.
- Sprinkle a few terms like "systemic bias" and crap ... and this list might make somebody sound almost intelligent on Wikipedia. --Nerd42eMailTalkUnMetaWPediah2g2 22:58, 3 March 2006 (UTC)
- Nerd, shut your pie hole and stop whining about how your article got deleted. *pulls out banhammer*-- 04:22, 4 March 2006 (UTC)
- 1. That's the nature of the wiki. MUMQ was up on VFD for well over a week. If people don't notice, what else can we do to make them notice that doesn't involve an exorbitant amount of time or energy?
- 2. Nerd, you've been around here long enough to know that that's not the reality here. We'd be drowning in "well, me and my two friends think it's funny" pages if deletion was a last resort. Uncyclopedia would be a mess.
- 3. A humor site should not cover every subject if some of the subjects are covered in unfunny ways. Kind of contradicts the purpose. I'm sure some people do vote on pages because they don't like the subject. I'm also sure that some people vote to keep or feature pages just because they do like the subject. If you expect voting, especially on a humor site, to be completely objective, you'll be disappointed.
- For the record, I've neither read nor voted on the Matrix quotes page. --—rc (t) 04:26, 4 March 2006 (UTC)
On people not noticing, I'm not sure what we can do about it, and I'm hoping something better will be thought of, because other people have noticed the same problem. I was hoping / thinking the voting ought to be objective, since I try to vote objectively myself. The policies around here are so self-contradictory they're not even funny. --Nerd42eMailTalkUnMetaWPediah2g2 20:40, 7 March 2006 (UTC)
- Voting is not going to be objective for two reasons.
- 1. People have biases.
- 2. Humor itself is subjective.
- Of course it'd be nice if #1 wasn't an issue, but realistically, it is and will always be an issue.
- And I don't know what policies you're referring to. --—rc (t) 04:50, 8 March 2006 (UTC)
- He's got you there Nerd. The very act of voting is the result of generating an opinion. Opinion de facto generates bias. You can't "objectively vote". That's bullshit and you know it.-- 17:46, 8 March 2006 (UTC)
- Also, because I'm feeling argumentatively pugilistic: gwax is noting that there are very few votes for entries (i.e. 2-3), and, as such, it's unfair to decide whether or not they should go. MUMQ had 13 votes, and lost 8-5. Again, Nerd, stop whining because it was your page.-- 17:50, 8 March 2006 (UTC)
- Um, referring to users as "whining" and threatening "banhammer" every time they say something with which you disagree isn't very constructive. Sorry. As for the question of all the deletions that've been happening lately? IMHO the deleted texts should be made available on request, with the one restriction that they not be reposted in their original format to main article space without taking the issue to VFD for a vote.
- It may also be a good idea to check "what links here" before deleting, as otherwise pages from a series template like {{cars}} or {{UStates}} will continue to end up on Special:Wanted pages with some artificially-high link count. Editing the template and all affected pages may be necessary to get such a page into "not wanted" status. --Carlb 03:55, 9 March 2006 (UTC)
- Believe me Carl, I only leave such choice words for Nerd and his ilk. You've seen me do my civil, helpful thing. In fact, it's usually what I do for most users. It's in my talk page and its archives, and many of my dump postings. Granted, some of it isn't completely friendly, but at least it's professional. In my own experience, I've found Nerd to be difficult to work with, and frankly, I think this is the same song and dance he's given time and again. --)
Top-Quotes and Templates
- A (&action=)purge fixes most problems of links and templates. *However*, for Special:Wanted and Special:Whatlinkshere, you have to null-edit every page the template appears in (after changing the template of course) to get it off those special pages. Same for categories in templates (which is why we can NRV timestamp them, it isn't a bug it is a feature!) --Splaka 04:01, 9 March 2006 (UTC)
- Who said that?--)
- teh not me. I think the only quotes we should keep are wilde, ballmer, bush, and stalin, and ballmer's up to question. "George Bush doesn't care about black people" and "i'm going to fucking kill google" are only funny for so long, guys--the only reason ballmer's on the list is that we all know he's going to say something stupid again. Scythe33 21:09, 9 March 2006 (UTC)
- {{Bushism}} seems to get away with the joke-in-template structure as Bush has said so many foolish things that one can be retrieved randomly at any time with little risk of repetition. Even then, it only gets away with this by relying on the random option/choose algorithm to keep pulling up new bloopers and misquotes and by not being reused on multiple pages. Jokes in templates (in general) don't work due to their static nature; templates are intended to be used and re-used, while jokes tend to lose their flavour (or at least their humour) when retold too many times. The {{Q}} or {{Quote}} model where the text is not 'canned' as part of the template and not re-used may work sometimes. Even there, there are plenty of unfunny quotes out there just because of the assumption that every page in the entire Uncyclopædia just has to have a quote from whomever. An unfunny quote will fall flat regardless of who is given the misattribution for it, so the only fix is to look at every quote on every page and remove the boring ones. No small task. --Carlb 17:40, 11 March 2006 (UTC)
Proposed Award
Carlb is right - I'd estimate we're talking about at least 2-3,000 pages with top-quotes, probably more, and at least half of those quotes are el suckola. If people were only doing it because they thought they were supposed to, then that's easily corrected - but as for the existing "damage," maybe it would be enough to just eliminate whatever intimidation factor there might be in removing stooopid top-quotes in general? There seems to be little stigma attached to NRV'ing pages, for example, so maybe something as simple as an award might help get the ball rolling:
Any suggestions/objections/etc.? c • > • cunwapquc? 18:52, 11 March 2006 (UTC)
- Cool template. Although someone already made one, it's on my userpage if you want a glimpse at it. FreeMorpheme 09:04, 1 August 2006 (UTC)
Here's What User:Nerd42 Thinks
I think U R all nuts --Nerd42eMailTalkUnMetaWPediah2g2 15:34, 24 March 2006 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:73h_Quote_Crew_r00lz!
I came across this example on Oracle's Java Tutorial describing Deadlock in multi threading scenarios.
So in this example I made following change at line 17 and line 18.
public class DeadLock {
static class Friend {
private final String name;
public Friend(String name) {
this.name = name;
}
public String getName() {
return this.name;
}
public synchronized void bow(Friend bower) {
//My Changes
//System.out.format("%s: %s" + " has bowed to me!%n", this.name, bower.getName()); //Line 17
System.out.println(this.name + ": " + bower.getName() + " has bowed to me!"); //Line 18
bower.bowBack(this);
}
public synchronized void bowBack(Friend bower) {
System.out.format("%s: %s has bowed back to me!%n", this.name, bower.getName());
}
}
public static void main(String[] args) {
final Friend alphonse = new Friend("Alphonse");
final Friend gaston = new Friend("Gaston");
new Thread(new Runnable() {
@Override
public void run() {
alphonse.bow(gaston);
}
}).start();
new Thread(new Runnable() {
@Override
public void run() {
gaston.bow(alphonse);
}
}).start();
}
}
Alphonse: Gaston has bowed to me!
Gaston: Alphonse has bowed back to me!
Gaston: Alphonse has bowed to me!
Alphonse: Gaston has bowed back to me!
There is no difference in whether you use
System.out.print or
System.out.format: they're basically doing the same thing.
The deadlock occurs here if execution of
Gaston.bow(Alphonse) is started between the start of
Alphonse.bow(Gaston) and
bower.bowBack(Alphonse) (or vice versa): the two threads are waiting for a monitor held by the other, and thus deadlock occurs.
This happens inconsistently, because it depends upon a subtle timing issue, depending upon how the threads are scheduled - it is possible that
Alphonse.bow and
bower.bowBack(Alphonse) complete before
Gaston.bow is executed, so it looks like there is no deadlock.
The classic way to fix this is to order the lock acquisition, so that the same lock is always acquired first; this prevents the possibility of deadlock:
public void bow(Friend bower) { // Method no longer synchronized.
    int firstHash = System.identityHashCode(this);
    int secondHash = System.identityHashCode(bower);
    Object firstMonitor = firstHash < secondHash ? this : bower;
    Object secondMonitor = firstHash < secondHash ? bower : this;
    synchronized (firstMonitor) {
        synchronized (secondMonitor) {
            // Code free (*) of deadlocks, with respect to this and bower at least.
        }
    }
}
(*) It's not quite guaranteed to be deadlock free, since
System.identityHashCode can return the same value for distinct objects; but that's reasonably unlikely.
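The same ordering idea can be sketched in Python (illustrative names; `threading.Lock` stands in for Java's intrinsic monitors, and `id()` plays the role of `System.identityHashCode`):

```python
import threading

class Friend:
    """Sketch of the lock-ordering fix; names are illustrative."""
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()

    def bow(self, bower):
        # Order the two locks by a stable key so that both threads
        # acquire them in the same order, removing the circular wait.
        first, second = sorted((self, bower), key=id)
        with first.lock:
            with second.lock:
                print(f"{self.name}: {bower.name} has bowed to me!")

alphonse = Friend("Alphonse")
gaston = Friend("Gaston")
t1 = threading.Thread(target=alphonse.bow, args=(gaston,))
t2 = threading.Thread(target=gaston.bow, args=(alphonse,))
t1.start(); t2.start()
t1.join(); t2.join()  # both threads finish; no deadlock
```

Without the `sorted(...)` step this sketch has exactly the same race as the Java original: each thread can grab its own lock and then block forever waiting for the other's.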
https://codedump.io/share/1AUA5OxbxTYK/1/mutithreading-with-systemoutformat-and-systemoutprintln
We are going to use Binary Tree and Minimum Priority Queue in this chapter. You can learn these from the linked chapters if you are not familiar with these.
Huffman code is a data compression algorithm which uses the greedy technique for its implementation. The algorithm is based on the frequency of the characters appearing in a file.
We know that our files are stored as binary code in a computer and each character of the file is assigned a binary character code; normally, these character codes are of fixed length for different characters. For example, if we assign 'a' as 000 and 'b' as 001, the length of the codeword for both characters is fixed, i.e., both 'a' and 'b' take 3 bits.
Huffman code doesn't use a fixed-length codeword for each character; it assigns codewords according to the frequency of the character in the file. Huffman code assigns a shorter codeword to a character which is used more often (has a higher frequency) and a longer codeword to a character which is used less often (has a lower frequency).
Since characters with a high frequency get shorter codewords, they take less space and reduce the space required to store the file. Let's take an example.
In this example, 'a' is appearing 51 out of 100 times and has the highest frequency, 'c' is appearing only 2 out of 100 times and has the least frequency. Thus, we are assigning 'a' the codeword of the shortest length i.e., 0 and 'c' a longer one i.e., 1100.
Now if we use characters of fixed length, we need 100*3 = 300 bits (each character is taking 3 bit) to represent 100 characters of the file. But to represent 100 characters with the variable length character, we need 51*1 + 20*3 + 2*4 + 3*4 + 9*3 + 15*3 = 203 bits (51*1 as 'a' is appearing 51 out of 100 times and has length 1 and so on). Thus, we can save 32% of space by using the codeword for variable length in this case.
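The arithmetic above is easy to verify directly. The snippet below just recomputes the bit counts from the frequencies and code lengths given in the example:

```python
# Frequencies and variable-length code sizes taken from the example above.
freqs   = {'a': 51, 'b': 20, 'c': 2, 'd': 3, 'e': 9, 'f': 15}
lengths = {'a': 1,  'b': 3,  'c': 4, 'd': 4, 'e': 3, 'f': 3}

fixed_bits = sum(freqs.values()) * 3  # every character costs 3 bits
variable_bits = sum(freqs[ch] * lengths[ch] for ch in freqs)
saving = (fixed_bits - variable_bits) / fixed_bits

print(fixed_bits, variable_bits, round(saving * 100))  # 300 203 32
```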
Also, the Huffman code is a lossless compression technique. You can see that we are not losing any information, we are just using a different way to represent each character.
A common doubt at this point is: "why are we using longer codewords for characters which have a lower frequency - can't we use a shorter length for them too? E.g., couldn't we use a 3-bit code to represent 'c' instead of 4?".
The answer is no and this is due to the prefix code. We are going to discuss prefix code in a while but let's first discuss how storing, encoding and decoding is done.
Storing, Encoding and Decoding
We basically concatenate characters while storing them into a file. For example, to store 'abc', we would use 000.001.010 i.e., 000001010 using a fixed character codeword. Now for the decoding, we know that all of our characters are 3 bits long, so we would break the code for every 3 bits and we can easily get 000, 001 and 010 which can be translated back to 'abc'.
Now, with variable-length codewords there is no fixed length at which we can split the stream back into characters, so to simplify decoding the codewords back into characters, we use prefix codes.
Prefix Codes
With variable-length codewords, we only use codes which are not the prefix of any other code, and such codes are known as prefix codes. For example, if we use 0, 1, 01, 10 to represent 'a', 'b', 'c' and 'd' respectively (0 is a prefix of 01 and 1 is a prefix of 10), then the code 00101 can be translated into 'aabab', 'acc', 'aadb', 'acab' or 'aabc'. To avoid this kind of ambiguity, we use prefix codes.
Using prefix codes makes decoding unambiguous. In the above example, we have used 0, 111 and 1100 for 'a', 'b' and 'c' respectively. No code is the prefix of any other code, and thus any combination of these codes will decode into a unique value. For example, if we write 01111100, it will uniquely decode into 'abc' only. Give it a try and try to decode it into something else.
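A minimal Python sketch of this greedy prefix decoding, using the example codes (the helper name is illustrative):

```python
codes = {'a': '0', 'b': '111', 'c': '1100'}  # prefix codes from the example

def decode(bits, codes):
    # Because no codeword is a prefix of another, we can scan greedily:
    # grow the current chunk bit by bit until it matches a codeword,
    # emit that character, and start a fresh chunk.
    reverse = {code: ch for ch, code in codes.items()}
    out, chunk = [], ''
    for bit in bits:
        chunk += bit
        if chunk in reverse:
            out.append(reverse[chunk])
            chunk = ''
    return ''.join(out)

print(decode('01111100', codes))  # -> abc
```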
Now, we know what is Huffman code and how it works. Let's now focus on how to use it.
Implementing Huffman Code
Let's first look at the binary tree given below.
We construct this type of binary tree from the frequencies of the characters given to us and we will learn how to do this in a while. Let's focus on using this binary tree first and then we will learn about its construction.
Let's first focus on the decoding process.
Decoding
One thing to notice here is that all the characters are on the leaves of the tree and to get the codeword for any character, we start from the root of the tree and proceed to that character. Now, if we move right from any node, we interpret that movement as 1 and if left, then 0. This is also indicated on the branch of the binary tree in the picture given above. So, we move from root to the leaf containing that character and combining the 0s and 1s of each movement, we get the codeword of the character. This is described in the picture given below.
In this way, we can get the code of each character.
We can proceed in a similar way with any code given to us to decode it. For example, let's take a case of 01111100.
We will start from the root and since the first number is 0, so we will move left. By moving left, we encountered a character 'a' and thus the first character is a.
Now, we will again start from the root, we have 1 in the code, so we will move right.
Since we have not reached any of the leaves, we will continue it. The next number is also 1 and so we will again move right.
Still, we have not reached the leaf, so continuing, the next number is also 1. Moving right this time will give us the character 'b'. Thus, 'ab' is the string we have decoded till now.
Similarly, again starting from the root and moving for the next numbers will make us reach 'c'.
Thus, we have decoded 01111100 into 'abc'.
We now know how to decode for Huffman code. Let's look at the encoding process now.
Encoding
As stated above, encoding is simple. We just have to concatenate the code of the characters to encode them. For example, to encode 'abc', we will just concatenate 0, 111 and 1100 i.e., 01111100.
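That concatenation step can be sketched in a couple of lines of Python with the same example codes:

```python
codes = {'a': '0', 'b': '111', 'c': '1100'}  # prefix codes from the example

def encode(text, codes):
    # Encoding is just looking up each character's codeword
    # and concatenating the results.
    return ''.join(codes[ch] for ch in text)

print(encode('abc', codes))  # -> 01111100
```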
Now, our next task is to create this binary tree for the frequencies we have.
Construction of Binary Tree for Huffman Code
This is the part where we use the greedy strategy. Basically, we have to assign a shorter code to the character with a higher frequency and vice-versa. We can do this in different ways and that will result in different trees, but a full binary tree (a tree in which every node other than the leaves has exactly 2 children) gives us the optimal code i.e., using that code will save the maximum space in storing the file.
To construct this tree for optimal prefix code, Huffman invented a greedy algorithm which we are going to use for the construction of the tree.
We start by sorting the characters according to their frequencies.
Now, we make a new node and then greedily pick the first two nodes from the sorted characters, making the first node the left child of the new node and the second node the right child of the new node. The value of the new node is the sum of the values of the children nodes.
We again sort the nodes according to the values and repeat the same process with the first two nodes.
In this way, we construct the tree for optimal prefix code.
We can see that the depth of a leaf is also the length of the prefix code of the character. So, the depth $d_i$ of leaf i is the length of the prefix code of the character at leaf i. If we multiply it by its frequency i.e., $d_i*i.freq$, then this is the number of bits used by this character in the entire file.
We can sum all these for each character to get the total number of bits used to store the file. $$ \text{Total bits} = \sum_{i}d_i*i.freq $$
This can also be used to prove why full binary tree gives us optimal codes because this summation of depths will be least in a full binary tree.
Now, we know how to construct the tree from their frequencies and then use that tree to know the prefix codes of characters and how to encode and decode. So, let's see the coding implementation for the construction of the tree.
Code for Huffman Code
In the steps of constructing the tree, we can see that we are continuously using the sorted nodes and using the first two elements from it. This feature will be well taken care of if we use minimum priority queue because just by inserting a new node, it will go to a position where the nodes will be in the sorted order and also extracting from a minimum priority queue will give us the nodes with least values first.
We need the frequencies of the characters to make the tree, so we will start making our function by passing the array of the nodes containing the characters to it -
GREEDY-HUFFMAN-CODE(C), C is the array which has the characters stored in nodes.
Our next task is to make a minimum priority queue using these nodes.
min_queue.build(C)
Minimum priority queue will automatically add these nodes in a position such that they are in the sorted order.
Now, we need to extract the first element from the queue and set it as the left child of a new node.
n = min_queue.length
z = new node
z.left = min_queue.extract()
Similarly, we need to set the right child of this new node to the element we will extract next.
z.right = min_queue.extract()
Now, the value of this new node will be the summation of the values of its children.
z.freq = z.left.freq + z.right.freq
After this, our queue and the node z will look like this:
Our next task is to insert this node in the queue. Since the queue is a minimum priority queue, it will add the node to a position where the queue will remain sorted.
min_queue.insert(z)
We need to repeat this process until the length of min_queue becomes 1 i.e., only the root is stored in the queue.
Also, we are extracting two nodes and adding one node to the queue in each step, so the loop runs n-1 times in total.
while min_queue.length > 1
z.left = min_queue.extract()
z.right = min_queue.extract()
...
At last, we will extract the root of the tree from the queue and return it.
while min_queue.length > 1
...
return min_queue.extract()
GREEDY-HUFFMAN-CODE(C)
  min_queue.build(C)
  while min_queue.length > 1
    z = new node
    z.left = min_queue.extract()
    z.right = min_queue.extract()
    z.freq = z.left.freq + z.right.freq
    min_queue.insert(z)
  return min_queue.extract()
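For comparison, the pseudocode maps almost line-for-line onto Python's `heapq` module. This is an illustrative sketch (not the article's implementation); note that the greedy construction may produce a different, equally valid tree than the hand-drawn one used earlier in the examples:

```python
import heapq
import itertools

def build_huffman_tree(freqs):
    """Build a Huffman tree from a {char: frequency} map, following the
    pseudocode: repeatedly extract the two lowest-frequency nodes, join
    them under a new internal node, and re-insert the combined node."""
    tiebreak = itertools.count()  # so heapq never has to compare node tuples
    heap = [(f, next(tiebreak), (ch, None, None)) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (None, left, right)))
    return heap[0][0], heap[0][2]  # (total frequency, root node)

def code_lengths(node, depth=0):
    """Leaf depth = length of that character's prefix code."""
    ch, left, right = node
    if ch is not None:
        return {ch: depth}
    lengths = code_lengths(left, depth + 1)
    lengths.update(code_lengths(right, depth + 1))
    return lengths

freqs = {'a': 51, 'b': 20, 'c': 2, 'd': 3, 'e': 9, 'f': 15}
total, root = build_huffman_tree(freqs)
lengths = code_lengths(root)
print(total)  # 100 - the sum of all frequencies ends up at the root
print(sum(lengths[ch] * f for ch, f in freqs.items()))  # total bits for the file
```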
#include <stdio.h>
#include <stdlib.h>

/* A tree node: stores a character, its frequency, and the
   addresses of its left and right children. */
typedef struct node {
    int frequency;
    char data;
    struct node *left;
    struct node *right;
} node;

int heap_array_size = 100; // size of array storing heap
int heap_size = 0;
const int INF = 100000;

// function to swap nodes
void swap(node *a, node *b) {
    node t = *a;
    *a = *b;
    *b = t;
}

/* function to print tree */
void inorder(struct node *root) {
    if (root != NULL) { // checking if the root is not null
        inorder(root->left);             // visiting left child
        printf(" %d ", root->frequency); // printing data at root
        inorder(root->right);            // visiting right child
    }
}

/* function for new node */
node* new_node(char data, int freq) {
    node *p = malloc(sizeof(struct node));
    p->data = data;
    p->frequency = freq;
    p->left = NULL;
    p->right = NULL;
    return p;
}

// function to get right child of a node of a tree
int get_right_child(int index) {
    if ((((2 * index) + 1) <= heap_size) && (index >= 1))
        return (2 * index) + 1;
    return -1;
}

// function to get left child of a node of a tree
int get_left_child(int index) {
    if (((2 * index) <= heap_size) && (index >= 1))
        return 2 * index;
    return -1;
}

// function to get the parent of a node of a tree
int get_parent(int index) {
    if ((index > 1) && (index <= heap_size))
        return index / 2;
    return -1;
}

/* Functions taken from minimum priority queue */
void insert(node A[], node *a, int key) {
    heap_size++;
    A[heap_size] = *a;
    int index = heap_size;
    while ((index > 1) && (A[get_parent(index)].frequency > a->frequency)) {
        swap(&A[index], &A[get_parent(index)]);
        index = get_parent(index);
    }
}

node* build_queue(node c[], int size) {
    node *a = malloc(sizeof(node) * heap_array_size); // a is the array to store the heap
    int i;
    for (i = 0; i < size; i++)
        insert(a, &c[i], c[i].frequency); // inserting node in array a (min-queue)
    return a;
}

void min_heapify(node A[], int index) {
    int left_child_index = get_left_child(index);
    int right_child_index = get_right_child(index);

    // finding smallest among index, left child and right child
    int smallest = index;
    if ((left_child_index <= heap_size) && (left_child_index > 0)) {
        if (A[left_child_index].frequency < A[smallest].frequency)
            smallest = left_child_index;
    }
    if ((right_child_index <= heap_size) && (right_child_index > 0)) {
        if (A[right_child_index].frequency < A[smallest].frequency)
            smallest = right_child_index;
    }

    // if smallest is not the node, the subtree is not a heap
    if (smallest != index) {
        swap(&A[index], &A[smallest]);
        min_heapify(A, smallest);
    }
}

node* extract_min(node A[]) {
    node minm = A[1];
    A[1] = A[heap_size];
    heap_size--;
    min_heapify(A, 1);

    node *z = malloc(sizeof(struct node)); // copying minimum element
    z->data = minm.data;
    z->frequency = minm.frequency;
    z->left = minm.left;
    z->right = minm.right;
    return z; // returning minimum element
}

// Huffman code
node* greedy_huffman_code(node C[]) {
    node *min_queue = build_queue(C, 6); // making min-queue
    while (heap_size > 1) {
        node *h = extract_min(min_queue);
        node *i = extract_min(min_queue);

        node *z = malloc(sizeof(node));
        z->data = '\0';
        z->left = h;
        z->right = i;
        z->frequency = z->left->frequency + z->right->frequency;
        insert(min_queue, z, z->frequency);
    }
    return extract_min(min_queue);
}

int main() {
    node *a = new_node('a', 42);
    node *b = new_node('b', 20);
    node *c = new_node('c', 5);
    node *d = new_node('d', 10);
    node *e = new_node('e', 11);
    node *f = new_node('f', 12);
    node C[] = {*a, *b, *c, *d, *e, *f};

    node *z = greedy_huffman_code(C);
    inorder(z); // printing tree
    printf("\n");
    return 0;
}
Analysis of Huffman Code
By using the min-heap to implement the queue, we can perform each operation of the queue in $O(\lg{n})$ time. Also, the initialization of the queue is going to take $O(n)$ time i.e., time for building a heap.
Now, we know that each operation is taking $O(\lg{n})$ time and we have already discussed that there are a total of n-1 iterations. Thus, the total running time of the algorithm will be $O(n\lg{n})$.
With this, we are going to finish the section of greedy algorithms. Next, we are going to learn some graph algorithms.
https://www.codesdope.com/course/algorithms-huffman-codes/
perlmeditation Abigail-II: Last week, hakkr posted some coding guidelines which I found to be too restrictive, and not addressing enough aspects. Therefore, I've made some guidelines as well. These are my personal guidelines; I'm not enforcing them on anyone else.

1. Warnings SHOULD be turned on.
Turning on warnings helps you find problems in your code. But it's only useful if you understand the messages generated. You should also know when to disable warnings - they are warnings after all, pointing out potential problems, but not always bugs.

2. Larger programs SHOULD use strictness.
The three forms of strictness can help you to prevent making certain mistakes by restricting what you can do. But you should know when it is appropriate to turn off a particular strictness, and regain your freedom.

3. The return values of system calls SHOULD be checked.
NFS servers will be down, permissions will change, files will disappear, disks will fill up, resources will be used up. System calls can fail for a number of reasons, and failure is not uncommon. Programs should never assume a system call will succeed - they should check for success and deal with failures. The rare case where you don't care whether the call succeeded should have a comment saying so. All system calls should be checked, including, but not limited to, close, seek, flock, fork and exec.

4. Programs running on behalf of someone else MUST use tainting; untainting SHOULD be done by checking for allowed formats.
Daemons listening to sockets (including, but not limited to, CGI programs) and suid and sgid programs are potential security holes. Tainting can help secure your programs by tainting data coming from untrusted sources. But it's only useful if you untaint carefully: check for accepted formats.

5. Programs MUST deal with signals appropriately.
Signals can be sent to the program. There are default actions - but they are not always appropriate. If not, signal handlers need to be installed. Care should be taken since not everything is reentrant. Both pre-5.8.0 and post-5.8.0 have their own issues.

6. Programs MUST deal with early termination appropriately.
END blocks and __DIE__ handlers should be used if the program needs to clean up after itself, even if the program terminates unexpectedly - for instance due to a signal, an explicit die or a fatal error.

7. Programs MUST have an exit value of 0 when running successfully, and a non-0 exit value when there's a failure.
Why break a good UNIX tradition? Different failures should have different exit values.

8. Daemons SHOULD never write to STDOUT or STDERR but SHOULD use the syslog service to log messages. They should use an appropriate facility and appropriate priorities when logging messages.
Daemons run with no controlling terminal, and usually their standard output and standard error disappear. The syslog service is a standard UNIX utility especially geared towards daemons with a logging need. It allows the system administrator to determine what is logged, and where, without the need to modify the (running) program.

9. Programs SHOULD use Getopt::Long to parse options. Programs MUST follow the POSIX standard for option parsing.
Getopt::Long supports historical style arguments (single dash, single letter, with bundling), POSIX style, and GNU extensions. Programs should accept reasonable synonyms for option names.

10. Interactive programs MUST print a usage message when called with wrong, incorrect or incomplete options or arguments.
Users should know how to call the program.

11. Programs SHOULD support the --help and --version options.
--help should print a usage message and exit, while --version should print the version number of the program.

12. Code SHOULD have an exhaustive regression test suite.
Regression tests help catch breakage of code. The regression tests should 'touch' all the code - that is, every piece of code should be executed when running the regression suite. All borders should be checked. More tests are usually better than fewer tests. Behaviour on invalid inputs needs to be tested as well.

13. Code SHOULD be in source control.
A source control tool will take care of keeping track of a history of changes, version numbers and who made the most recent change(s).

14. All database modifying statements MUST be wrapped inside a transaction.
Your data is likely to be more important than the runtime or code size of your program. Data integrity should be retained at all costs.

15. Subroutines in standalone modules SHOULD perform argument checking and MUST NOT assume valid arguments are passed.
Perl doesn't compile-time check the types of, or even the number of, arguments. You will have to do that yourself.

16. Objects SHOULD NOT use data inheritance unless it is appropriate.
This means that "normal" objects, where the attributes are stored inside anonymous hashes or arrays, should not be used. Non-OO programs benefit from namespaces and strictness; why shouldn't objects? Use objects based on keying scalars, like fly-weight objects, or inside-out objects. You wouldn't use public attributes in Java all over the place either, would you?

17. Comments SHOULD be brief and to the point.
If you need lots of comments to explain your code, you may consider rewriting it. Subroutines that have a whole blob of comments describing arguments and return values are suspect. But do document invariants, pre- and postconditions, (mathematical) relationships, theorems, observations and other relevant things the code assumes. Variables with a broad scope might warrant comments too.

18. POD SHOULD NOT be interleaved with the code, and is not an alternative for comments.
Comments and POD have two different purposes. Comments are there for the programmer - the person who has to maintain the code. POD is there to create user documentation from, for the person using the code. POD should not be interleaved with the code because this makes it harder to find the code.

19. Comments, POD and variable names MUST use English.
English is the current lingua franca.

20. Variables SHOULD have as limited a scope as is appropriate.
"No global variables", but better. Just disallowing global variables means you can still have a loop variant with a file-wide scope. Limiting the scope of variables means that loop variants are only known in the body of the loop, temporary variables only in the current block, etc. But sometimes it's useful for a variable to be global, or have a file-wide scope.

21. Variables with a small scope SHOULD have short names; variables with a broad scope SHOULD have descriptive names.
$array_index_counter is silly; for (my $i = 0; $i < @array; $i ++) { .. } is perfect. But a variable that's used all over the place needs a descriptive name.

22. Constants (or variables intended to be constant) SHOULD have names in all capitals (with underscores separating words), and so SHOULD IO handles. Package and class names SHOULD use title case, while other variables (including subroutines) SHOULD use lower case, words separated by underscores.
This seems to be quite common in the Perl world.

23. Custom delimiters SHOULD be tall and skinny.
/, !, | and the four sets of braces are acceptable; #, @ and * are not. Thick delimiters take too much attention. An exception is made for q $Revision: 1.1.1.1$, because RCS and CVS scan for the dollars.

24. Operators SHOULD be separated from their operands by whitespace, with a few exceptions.
Whitespace increases readability. The exceptions are: unary +, -, \, ~ and !; and no whitespace between a comma and its left operand. Note that there is whitespace between ++ and -- and their operands, and between -> and its operands.

25. There SHOULD be whitespace between an identifier and its indices. There SHOULD be whitespace between successive indices.
Taking an index is an operation as well, so there should be whitespace. Obviously, we cannot apply this rule in interpolative contexts.

26. There SHOULD be whitespace between a subroutine name and its parameters, even if the parameters are surrounded by parens.
Again, readability.

27. There SHOULD NOT be whitespace after an opening parenthesis, or before a closing parenthesis. There SHOULD NOT be whitespace after an opening indexing bracket or brace, or before a closing indexing bracket or brace.
That is: $array [$key], $hash {$key} and sub ($arg).

28. The opening brace of a block SHOULD be on the same line as the keyword and the closing brace SHOULD align with the keyword, but short blocks are allowed to be on one line.
This is K&R style bracing, except that we require it for subroutines as well. We do allow map {$_ * $_} @args to be on one line though.

29. No cuddled elses or elsifs. But the while of a do { } while construct should be on the same line as the closing brace.
It just looks better that way! ;-)

30. Indents SHOULD be 4 spaces wide. Indents MUST NOT contain tabs.
4 spaces seems to be an often used compromise between the need to make indents stand out, and not getting cornered. Tabs are evil.

31. Lines MUST NOT exceed 80 characters.
There is just no excuse for that. More than 80 characters means it will wrap in too many situations, leading to hard to read code.

32. Align code vertically.
This makes code look more pleasing, and it brings attention to the fact that similar things are happening on close-by lines. Example:
my $var      = 18;
my $long_var = "Some text";

This is just a first draft. I've probably forgotten some rules.

Abigail
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=215675
Missing dependencies
I keep on getting a missing dependency error. From what I can the dependencies are not missing. E.g. I am getting an error indicating that numpy is missing.
How do I fix this?
@Cyferfontein, I have not encountered a similar issue. Which package are you trying to install or use?
Always a good idea to include a full stack trace – that long list of cryptic info is often more revealing than you think.
@mikael, the trace:
Traceback (most recent call last):
File "/private/var/mobile/Containers/Shared/AppGroup/5BFAF8BC-3F1C-41C6-A555-E45537B9DC2E/Pythonista3/Documents/Test/Practice.py", line 2, in <module>
import pandas
File "/private/var/mobile/Containers/Shared/AppGroup/5BFAF8BC-3F1C-41C6-A555-E45537B9DC2E/Pythonista3/Documents/site-packages-3/pandas/__init__.py", line 19, in <module>
"Missing required dependencies {0}".format(missing_dependencies))
ImportError: Missing required dependencies ['numpy']
This is a very well known problem:
pandas does not work in pythonista. You cannot install it.
You may have also tried to install numpy, which is already installed and cannot be reinstalled. Go to your site-packages-3 and site-packages folder and delete any numpy folders you see.
@JonB, thank you. I don’t get the numpy error anymore.
Any idea when or if one might be able to use pandas in Pythonista?
https://forum.omz-software.com/topic/5075/missing-dependencies
Apparently the ur"" literal (the u and r prefixes combined) is not valid in Python 3, so how can one write a raw unicode literal that works in both Python 2 and Python 3? For example:
tamil_letter_ma = u"\u0bae"
marked_text = ur"\a%s\bthe Tamil\cletter\dMa\e" % tamil_letter_ma
It's easy to overcome the limitation.
Why don't you just use a raw string literal (r'....')? You don't need to specify u because in Python 3, strings are unicode strings.
>>> marked_text = r"\a%s\bthe Tamil\cletter\dMa\e" % tamil_letter_ma
>>> marked_text
'\\aம\\bthe Tamil\\cletter\\dMa\\e'
To make it also work in Python 2.x, Add the following Future import statement at the very beginning of your source code, so that in the string literals in the source code become unicode.
from __future__ import unicode_literals
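A minimal sketch of the combined recipe (runnable on Python 3 as-is, and on Python 2 thanks to the import, where it turns the literals into unicode):

```python
from __future__ import unicode_literals  # no effect on Python 3; makes Python 2 literals unicode

tamil_letter_ma = "\u0bae"  # the Tamil letter Ma
marked_text = r"\a%s\bthe Tamil\cletter\dMa\e" % tamil_letter_ma
# The backslashes stay literal thanks to the r prefix:
assert marked_text == "\\a\u0bae\\bthe Tamil\\cletter\\dMa\\e"
```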
Source: https://codedump.io/share/iMuSQE3miYRG/1/raw-unicode-literal-that-is-valid-in-python-2-and-python-3
On Fri, Oct 28, 2011 at 02:00:30PM +0100, Dave Scott wrote:
>
> My local checkout shows it using xml-light2 to bridge to xmlm :/ Anyway
> I don't know much about RSS but this one seems a little small (96 LOC in
> my tree). Is it complete enough to use? I have no strong opinion about
> this, except I think it would be polite to avoid claiming the name "rss"
> in the global namespace unless it's complete enough.
>
> > > sexpr
> >
> > Any reason not to use sexplib here instead of another hand-rolled
> > library? sexplib is pretty battle-hardened.
>
> It would be better to use sexplib, it's just a matter of time. First we
> still have to complete the xmlm upgrade and remove xml-light2 :)
As a very first step, how about renaming all the findlib packages to have
'xen-<name>' and then all of these rationalisations could be done as
people get time.
>
> > > stunnel
> >
> > Massively useful to have this released as a standalone library. SSL in
> > OCaml remains a pain, and having 'one good way' to do it (e.g. use
> > stud or stunnel?) would be handy.
>
> It would be nice to have 'one good way' to do it :) Last time I looked
> this code was in a shocking state. We will probably improve it though
> since this one is really important for us. We might want to switch from
> stunnel to stud though ;) Maybe it should have a slightly more generic
> name like 'sslterminator'?
Yeah.
--
Anil Madhavapeddy
Source: https://lists.xenproject.org/archives/html/xen-api/2011-10/msg00066.html
Testing Groovy Classes with ScalaTest
The other day I read about ScalaTest. I liked the way the tests look. And I started thinking about where to use it. Today a colleague came to me asking if he could add some tests to our small tool developed in Groovy. That rang a bell, and it turned into a calling. So here is the ultimate solution for integrating Groovy and ScalaTest.
I started with a simple Groovy class org.jboss.qa.SuperUtil (yes, I'm a JBoss Quality Assurance guy ;-)) with a single static method to add two numbers. The class is located in the src directory.
def class SuperUtil {
    def static add(int a, int b) {
        return a + b
    }
}
Second, I wrote a org.jboss.qa.test.SuperUtilSuite according to the ScalaTest quickstart.
import org.scalatest._
import org.jboss.qa.SuperUtil

class SuperUtilSuite extends FunSuite {
    test("SuperUtil add should return addition of two numbers") {
        assert(SuperUtil.add(2, 3) == 5)
    }
}
As you can see, I directly imported the Groovy class.
We had two basic requirement:
- We wanted to be able to run the tests from a Groovy script.
- We wanted to avoid any unnecessary libraries hanging around.
So I started with a Groovy script test.groovy with low hanging Grapes:
@Grapes([
    @Grab(group='org.scala-lang', module='scala-library', version='2.9.2'),
    @Grab(group='org.scalatest', module='scalatest_2.9.1', version='1.8'),
])
I realized that ScalaTest wants only compiled test classes (is there any way around this?). To be able to compile SuperUtilSuite, I must compile SuperUtil first. So things started to get complicated. In Groovy there is a very easy way of using various tools like compilers: AntBuilder. Let's start by defining all the Ant tasks we will need.
def ant = new AntBuilder()
ant.taskdef(name: 'groovyc', classname: 'org.codehaus.groovy.ant.Groovyc')
ant.taskdef(resource: 'scala/tools/ant/antlib.xml')
ant.taskdef(name: 'scalatest', classname: 'org.scalatest.tools.ScalaTestAntTask')
None of these tasks can be defined yet, because none of these resources are on the classpath. I must add the Scala compiler to Grapes and make Grapes load the classes with the default system classloader. The updated Grapes configuration reads like this:
@Grapes([
    @Grab(group='org.scala-lang', module='scala-library', version='2.9.2'),
    @Grab(group='org.scalatest', module='scalatest_2.9.1', version='1.8'),
    @Grab(group='org.scala-lang', module='scala-compiler', version='2.9.2'),
    @GrabConfig(systemClassLoader = true)
])
Now I can create target directories and compile Groovy classes.
ant.mkdir(dir: 'target/test-classes')
ant.mkdir(dir: 'target/classes')
ant.groovyc(srcdir: 'src', destdir: 'target/classes')
Do not forget to specify the correct package in your sources. I forgot that, and it took me a while to realize why the compiler stored the class directly under target/classes. What a stupid mistake.
Compilation of the Scala test class is not that straightforward. It needs the base Scala library, the ScalaTest library, the Groovy classes to be tested, and the Groovy library (a Groovy class usually depends on GroovyObject, for example). The question is how to add the dependencies imported with Grapes and how to add the Groovy library.
Classes imported with Grapes can be obtained with Grape.resolve(). It returns an array of all the libraries imported with @Grapes annotation(s).
There are two ways of obtaining the Groovy library. One could use Grapes as well, but that might pull in a different version of Groovy than the one used to run the script, and this can lead to ugly exceptions. So I decided to look up groovy-all-*.jar directly in the $GROOVY_HOME/embeddable directory. I did not need invokedynamic, so I removed the jar file with 'indy' in its name from the classpath.
ant.scalac(srcdir: 'test', destdir: 'target/test-classes', fork: false) {
    classpath {
        pathelement(location: 'target/classes')
        fileset(dir: System.getenv('GROOVY_HOME') + '/embeddable') {
            include(name: 'groovy-all-*.jar')
            exclude(name: '*indy*')
        }
        Grape.resolve(new HashMap()).each {
            pathelement(location: new File(it).absolutePath)
        }
    }
}
Finally, I was able to use the ScalaTest Ant task to run the test. It is almost the same as calling the Scala compiler, but it has one more classpath element: the compiled test classes.
ant.scalatest(suite: 'org.jboss.qa.test.SuperUtilSuite') {
    runpath {
        pathelement(location: 'target/classes')
        pathelement(location: 'target/test-classes')
        fileset(dir: System.getenv('GROOVY_HOME') + '/embeddable') {
            include(name: 'groovy-all-*.jar')
            exclude(name: '*indy*')
        }
        Grape.resolve(new HashMap()).each {
            pathelement(location: new File(it).absolutePath)
        }
    }
}
In the end, everything went as expected.
[scalatest] Run starting. Expected test count is: 1
[scalatest] SuperUtilSuite:
[scalatest] - SuperUtil add should return addition of two numbers
[scalatest] Run completed in 115 milliseconds.
[scalatest] Total number of tests run: 1
[scalatest] Suites: completed 1, aborted 0
[scalatest] Tests: succeeded 1, failed 0, ignored 0, pending 0
[scalatest] All tests passed.
Now I could just use some report writers and that's all folks.
You can get complete source code from GitHub.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Source: http://soa.dzone.com/articles/testing-groovy-classes
The more people using Volumio2 on a C2 with a USB Audio DAC, the more reports we seem to get about clicks and pops (with kver 3.14.79).
We are talking about various DACs, e.g. a USB WaveIO, a Parasound Halo Integrated (ESS Sabre32 Reference ES9018K2M), a Chord Mojo and an FX-AUDIO-D-802.
On request, the users tried nrpacks=1 and/or changed cpu affinity.
Nothing seems to be a real fix, though it did improve a little for some.
E.g. the WaveIO seems to be OK with changing cpu affinity; all the others still suffer.
As there were reports of Volumio pre-releases not having these issues, I requested a few of the users to verify an older (pre-release) Volumio version from August (still with kver 3.14.29, compiled on 22/04/2016).
Then some of them indeed reported back that the version with the "old" kernel did not have such problems, the sound appeared to be clean.
After that, I tracked all USB driver commits between version 3.14.29 from 22/04 and 3.14.79 from 08/12 and "stumbled" over:
Code:

Committed by mdrjr / 24 May 2016
driver: drivers/amlogic/usb/dwc_otg/310/dwc_otg_hcd_queue.c

@@ -367,13 +367,13 @@ static int check_periodic_bandwidth(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh)
 		 * Max periodic usecs is 80% x 125 usec = 100 usec.
 		 */
-		max_claimed_usecs = 100 - qh->usecs;
+		max_claimed_usecs = 125 - qh->usecs;
 	else
 		/*
 		 * Full speed mode.
 		 * Max periodic usecs is 90% x 1000 usec = 900 usec.
 		 */
-		max_claimed_usecs = 900 - qh->usecs;
+		max_claimed_usecs = 1000 - qh->usecs;
 	if (hcd->periodic_usecs > max_claimed_usecs) {
 		DWC_INFO("%s: already claimed usecs %d, required usecs %d\n",
 			__func__, hcd->periodic_usecs, qh->usecs);
@@ -724,4 +724,4 @@ int dwc_otg_hcd_qtd_add(dwc_otg_qtd_t *qtd,
 	return retval;
 }
-#endif /* DWC_DEVICE_ONLY */
+#endif /* DWC_DEVICE_ONLY */
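For context: USB 2.0 high-speed microframes are 125 µs long, and the comments in the diff note that at most 80% of each (100 µs) is supposed to be reserved for periodic transfers such as isochronous audio; the commit raises the claimable budget to the whole frame. A simplified Python model of that admission check (this is an illustration mirroring the values in the diff's comments, not the kernel's actual code):

```python
def max_claimed_usecs(high_speed: bool, qh_usecs: int, full_budget: bool) -> int:
    """Bandwidth (in microseconds) a new periodic endpoint may still claim.

    high speed: 125 us microframes, 80% reserved -> 100 us budget;
    full speed: 1000 us frames, 90% reserved -> 900 us budget.
    full_budget=True mimics the commit above, which allows the whole frame.
    """
    if high_speed:
        budget = 125 if full_budget else 100
    else:
        budget = 1000 if full_budget else 900
    return budget - qh_usecs

# An endpoint needing 30 us when 80 us are already claimed on a
# high-speed bus: the 80% budget rejects it (80 > 100 - 30 = 70)...
assert 80 > max_claimed_usecs(True, 30, full_budget=False)
# ...while the patched full-frame budget accepts it (80 <= 125 - 30 = 95).
assert 80 <= max_claimed_usecs(True, 30, full_budget=True)
```

If that reading is right, the commit trades spec-conforming bandwidth headroom for accepting more periodic endpoints, which could plausibly interact with isochronous audio scheduling.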
To my big surprise, the pops and crackling are reported to be gone for the WaveIO, the Chord Mojo and the Parasound.
The FX-Audio still has other issues, which appear not to be related.
To me it seems we had a regression between kernel version 3.14.29 (compiled 22/04/2016) and 3.14.79 (compiled 08/12/2016).
What was the reason for the commit listed above, and is there anything we can do to help solve the USB audio issue?
-Gé-
Source: https://forum.odroid.com/viewtopic.php?p=193827
Can't block for a Deferred in Twisted
Despite the existence of the promising waitForDeferred/deferredGenerator and the newer inlineCallbacks, it appears there is no way to block while waiting for a Deferred. Brian Granger described the problem on the Twisted mailing list:
I have a function that returns a Deferred. I need to have the result of this Deferred returned in a (apparently) blocking/synchronous manner:

def myfuncBlocking():
    d = myfuncReturnsDeferred()
    ...
    return result

I need to be able to call this function like:

result = myfuncBlocking()

The question is how to get the result out of the Deferred() and make it *look* like myfuncBlocking() has blocked.

glyph provided the succinct answer (as well as an interesting commentary on using Twisted the wrong way).
This issue has been discussed repeatedly - long story short, it's just a bad idea.
Hmmm, maybe learning Twisted will be harder than I thought.
Update 2008-10-20: Marcin Kasperski wrote a good example comparing raw deferreds, deferred generators, and inline callbacks.
Related posts
- Twisted web POST example w/ JSON — posted 2010-08-25
- Quick notes on trying the Twisted websocket branch example — posted 2010-05-24
- Running a Twisted Perspective Broker example with twistd — posted 2008-10-27
- Twisted links — posted 2008-10-21
- Running functions periodically using Twisted's LoopingCall — posted 2008-10-14
You can easily use Queue.Queue to turn a Deferred call into a blocking call:
def block_on(d, timeout=None):
    q = Queue()
    d.addBoth(q.put)
    try:
        ret = q.get(timeout is not None, timeout)
    except Empty:
        raise Timeout
    if isinstance(ret, Failure):
        ret.raiseException()
    else:
        return ret
Make sure you never make blocking calls from the reactor thread, though.
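The same Queue trick works for any callback-style API, not just Twisted Deferreds. A self-contained sketch with no Twisted dependency (the function names here are invented for illustration):

```python
import threading
import queue

def block_on(async_call, timeout=None):
    """Run callback-style async_call(callback) and block until the callback fires.

    This is the generic form of the Queue trick above: the callback drops the
    result into a queue, and the caller's thread blocks on q.get().
    """
    q = queue.Queue()
    async_call(q.put)
    return q.get(timeout=timeout)

# A toy "async" function that delivers its result from another thread:
def delayed_answer(callback):
    threading.Timer(0.05, callback, args=(42,)).start()

print(block_on(delayed_answer))  # prints 42
```

As with the Twisted version, the blocking call must happen on a different thread from the one that fires the callback, or it will deadlock.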
Source: https://www.saltycrane.com/blog/2008/10/cant-block-deferred-twisted/
Hi, guys! So how do you make ropes? As far as I understand, there is no "rope object" in Unity. The derrick of the crane is also able to go up and down. On the end of the derrick there is another cylinder that also interacts with the rope. I think I'll post a picture to visualize my words. :-) I apologise for the quality of the picture, I'm not that good at Photoshop. So how do you make something like this:
Any suggestions?
P.S. I looked up this topic:
and the scripts are great, but the rope doesn't interact with rigid bodies. I also tried to use a skinned cylinder, but it doesn't interact with rigid bodies either.
Deleted answer by Jake 2 due to reports of links gone bad and now containing viruses due to domain shifting.
Answer by equalsequals · Jul 16, 2010 at 03:11 PM
If you want to make a physics rope, it will take several segments of rigid bodies attached by Joints. Currently your best bet would be the HingeJoint.
I recently worked on a Unity-powered game where we had a chain/rope-like string of objects that trailed behind the character and that was the class we used to tie those rigid bodies together.
I'm going to be honest, this is a pretty intricate task and there isn't really an easy way to implement this - especially the winding/unwinding functionality.
My personal advice is this - fake it as best you can. Unless the entire game is solely about the most realistic crane ever built, just fake it.
Judging by the bottom-left image (stick man "platforming" on the crane), and correct me if I am wrong, I'm guessing that you are making a platformer of sorts.
When it comes to production you want to look at a cost-per-component ratio. You want the most important features, like your controls and other main game mechanics to have the most time/money invested. If something is rather minute in scope, insignificant in the grand scheme of things, investing so much time as to create absolute real-world physics is going to blow the cost/value of component ratio way out of proportion.
It's really up to you to decide how important any component of the game is, but a rule of thumb I use is how often any one component appears in the game. If you've got 1-2 cranes in a level - not so important. Again, if the crane is the entire game, essentially, disregard what I said, use the HingeJoint class, and I will eat my hat.
In the scenario where it should just be faked, I would do it using animations and coordinated/synced interpolated collision geometry to achieve a "good enough" product.
Sorry for the length, hope this helps.
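The segments-joined-by-joints idea above can be prototyped outside any engine. Here is a minimal position-based (Verlet) rope sketch in Python: point masses linked by fixed-length constraints, the same structure as a chain of rigid bodies joined by hinges. All parameters are invented for illustration; this is not Unity API code.

```python
def simulate_rope(n_segments=10, seg_len=1.0, steps=100, gravity=-9.8, dt=0.02, iters=10):
    """Simulate a 2D rope anchored at the origin; returns the final point positions."""
    pos = [[i * seg_len, 0.0] for i in range(n_segments + 1)]  # start horizontal
    prev = [p[:] for p in pos]
    for _ in range(steps):
        # Verlet integration: velocity is implicit in (pos - prev).
        for i in range(1, len(pos)):           # index 0 is the anchored end
            x, y = pos[i]
            vx, vy = x - prev[i][0], y - prev[i][1]
            prev[i] = [x, y]
            pos[i] = [x + vx, y + vy + gravity * dt * dt]
        # Constraint relaxation: pull each pair of points back to seg_len apart.
        for _ in range(iters):
            for i in range(len(pos) - 1):
                ax, ay = pos[i]
                bx, by = pos[i + 1]
                dx, dy = bx - ax, by - ay
                dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
                corr = (dist - seg_len) / dist / 2
                if i == 0:                     # anchor stays put; move only b
                    pos[i + 1] = [bx - dx * corr * 2, by - dy * corr * 2]
                else:
                    pos[i] = [ax + dx * corr, ay + dy * corr]
                    pos[i + 1] = [bx - dx * corr, by - dy * corr]
    return pos
```

Raising the constraint iteration count stiffens the rope at the cost of performance, the same trade-off reported in the comments further down about PhysX joint iteration counts.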
==
Thanks, equalsequals! I'll try the hinge joint thing. I want the rope to be as realistic as possible, because the level consists of a platform and two cranes, so the cranes (and ropes) are important.
But the top two green "wheels" + the rope between them can be faked without ruining the actual game, as I see it. Just make the "hinge joint" rope from the tip of the crane and down. Don't make the "roll around barrel" part with real simulation; just fake the look with animation, or place the "wheel" inside a box so you can only see the rope going in through a hole.
Answer by paste120 · Jul 16, 2010 at 06:24 PM
Physics rope could be created in Unity3 by using the new cloth system. Demo video:
Wow! Very cool feature! But unfortunately there is no Unity 3 yet, only a beta for those who pre-ordered the product. I use the indie version.
Ya, I'm sorry. I've upgraded and have access to the beta. It's well worth the upgrade. You would be able to accomplish your idea much more easily in 3 with a physics cloth cylinder, but with 2.6 equalsequals is right.
Answer by buestad · Nov 29, 2012 at 02:45 PM
PhysX (the built-in physics engine) is useless for creating ropes/chains using RigidBodies and ConfigurableJoints if you are attaching anything "heavy" to the end. The reason is that PhysX only has an iterative solver. You need a direct solver for this.
I've done some testing using the Open Dynamics Engine (ODE) and its direct solver (use the world.step() function, not world.quickStep(), which uses the iterative solver). ODE is originally written in C/C++, but there is a port to Java called ODE4j which can quickly be compiled to a C# .dll using IKVM.
My tests so far are really promising!
The attached image shows my setup: the load of the ball is 10,000 kg, and the steel rope is 47 mm thick with a density of 7000 kg/m3, which is close to iron. The crane parts are 3000 kg for the main jib and the king (the cylinder thing) and 2000 kg for the knuckle jib. Both jibs are attached using limitless ODE hinge joints, and the hydraulic cylinders which hold everything up are limited ODE PRJoints (which is like a slider and a hinge put together).
It's admirable that you got ODE working in Unity.
It's an interesting observation that the usual ropes everyone uses in PhysX (just get any of the "rope" kits from the asset store) are not suitable for a weight on the end.
@buestad How did you integrate ODE exactly? It's not working how I tried it :/ Can you help me?
@LucaH007 It's a while back, but I'll try to remember...
I made this at work, so I have a layer on top of ODE4j, and my implementation is probably a bit different from a clean ODE4j implementation. But it should be possible without this layer.
First you need to make a .dll of ODE4j using IKVM. Download the latest IKVM and ODE4j. IKVM expects one or more .jar files, so if you download the source code of ODE4j I guess you will have to ANT it...
When you have your C# ode4j.dll, put it inside the Assets folder together with the IKVM .dlls.
Then you have to start making a bunch of C# scripts to put on different GameObjects in Unity. The different C# scripts should contain the ODE objects. Start implementing World and Body. Remember that the instance of World has to be created before you can create instances of Body, so you need to implement some kind of system that makes sure the instance of World is created first, or a callback when it is created. You don't have control over which C# script of which GameObject is started first...
By the way, when using IKVM you have to set the API compatibility level to .NET 2.0 (not the subset).
@buestad First of all, thanks a lot for your answer! I stuck right in the beginning at converting ODE4j to a jar and then to a dll file... I got an ode4j.dll file but I can't call it in Unity by script... (For your information: I'm beginner ;) )
But I found out that you can use the original ODE library in Unity somehow.
Now I don't know how to go on xD
@LucaH007 if you use the original ODE written in c++ you'll have to add some information so that the C# code knows where to find the functions inside dll. I am no expert in c++, C# or dlls in general (by far), so therfore I went through I$$anonymous$$VM which generates C# spesific dlls.
I made a jar of a version of ODE4j which was on my drive. I think it is ode4j 0.12.0-j1.4, and I compiled a dll with ikvm-8.1.5717.0. Then I included the dll, together with all the dlls of IKVM (the x86 versions), and made a simple test project in Unity 5.2.2 x86.
I will PM a zip of it to you if I can figure out how :)
@buestad That would be great! Unity ropes leave me desperate :D
I think there is no way to PM via unity forum/answers so just send me the zip to this temporary email: LucaH007@cuvox.de :)
Answer by King_Kurl · Aug 27, 2016 at 06:54 AM
Not sure if you already got what you needed but I remember some time ago I was looking for the same thing and stumbled on this script which pretty much turns a line renderer into a rope. I don't remember how well it worked but in case you want to give it a try here's the link.
This works, but unfortunately it still suffers from the same issues that all ropes made using PhysX seem to suffer from: the joints can stretch too much and go unstable if you have a mass ratio of more than ~1:10 for what's on the end of the rope. Things improve dramatically when the position and velocity iteration counts are increased to ~50, but this leads to big performance issues. Good find though! And certainly a solution if you don't need big forces in the rope.
UPVOTE!! For most people this answer is still the best answer. They type "make rope unity" in Google and arrive here because the title states "how do you make ropes?", and this script is a perfect answer to that question. This script works perfectly even with Unity 5.4.
This is helpful, but it was confusing to get working at first (I didn't have the objects far enough apart).
Add this to get a menu item that adds an example to your scene. Then hit play, open the Scene panel, grab "start", and wiggle it!
using UnityEditor;
using UnityEngine;
// ...
[MenuItem("Tools/Setup Rope Example")]
static void CreateExample()
{
var end = new GameObject("end");
var rb = end.AddComponent<Rigidbody>();
rb.constraints = RigidbodyConstraints.FreezeRotationX
| RigidbodyConstraints.FreezeRotationY
| RigidbodyConstraints.FreezeRotationZ;
end.transform.position = Vector3.forward * 10.0f;
var start = new GameObject("start");
rb = start.AddComponent<Rigidbody>();
rb.constraints = RigidbodyConstraints.FreezeAll; // no movement so everything dangles off of it
var rope = start.AddComponent<RopeScript>();
rope.target = end.transform;
}
Answer by daneyuleb · Feb 25, 2018 at 02:16 PM
Has anyone got the C# version of this to work? It seems broken.
I drop the script in, attach my target object, and the rope part seems to work--a segmented rope attaches between the anchor object and the target--but... the target object deforms as it moves, squeezing and stretching all over.
Source: https://answers.unity.com/questions/22550/how-do-you-make-ropes.html
I agree with Ufuk.
I'm uncertain if it's a good idea to delay the 0.8.0 release for these "last minute"
streaming features.
I have the impression that the current master is really stable and very
well tested. If we now bring in some prototypes and last-minute features,
we'll probably end up discussing a 0.8.1 bugfix release in two weeks.
I would prefer a 0.8.0 release asap because it has already been way too long
with the issues unfixed in 0.7.0. Also, if the graduation is going to
happen soon, we'll be unable to release for two weeks or so because INFRA
has to transfer us to the new namespace (mailing lists, website domain, release
spaces, ...).
As a general lesson from this discussion, we still have to improve our
communication to get a better mutual understanding of what we are working
on. I'll create a wiki page today that contains a roadmap of the features
we are working on.
On Mon, Dec 8, 2014 at 1:28 PM, Ufuk Celebi <uce@apache.org> wrote:
> On Mon, Dec 8, 2014 at 11:05 AM, Márton Balassi <balassi.marton@gmail.com>
> wrote:
>
> > Ok guys, then I also agree to skip 0.7.1 and go straight for 0.8.0. As
> for
> > the streaming side we would like to finish a couple of features (lambda 8
> > support, type handling rework, filesystems i/o support, at least once
> fault
> > tolerance prototype). I'm confident that we can get most of the things
> that
> > we really want in there done by the end of this week.
> >
>
>
> This sounds like a lot. I can understand that you want as much in a release
> as possible though. ;) Would it make sense to reduce this list to the most
> important changes and keep the rest for 0.8.1?
>
Source: http://mail-archives.apache.org/mod_mbox/flink-dev/201412.mbox/%3CCAGr9p8CE-CPj7_s68PNTN873YYGVQT7W7FZHMOR5nGv=w039=w@mail.gmail.com%3E
23 August 2011 04:04 [Source: ICIS news]
SINGAPORE (ICIS)--Saudi Arabia's Petro Rabigh restarted its propylene oxide (PO) plant over the weekend following a turnaround that lasted almost four months, market sources said on Tuesday.
The 200,000 tonne/year PO facility, located at Rabigh in
The company had targeted to restart the
Petro Rabigh officials declined to comment on plant operations.
Tight availability of PO imports sent spot prices in Asia surging by about $200/tonne (€140/tonne) over the past four weeks to $2,030-2,090/tonne CFR (cost and freight)
"Any PO cargoes that Petro Rabigh can dispatch to
Prices may soften in the coming weeks when regional supplies normalise, he said.
The Rabigh petrochemical complex includes a 400,000 bbl/day refinery and high olefins fluid catalytic cracker (HOFCC) that produce 1.3m tonnes/year of ethylene and 900,000 tonnes/year of propylene.
The HOFCC resumed production on 31 July.
Petro Rabigh is a 50:50 joint venture between Saudi Aramco and
($1 = €0.70)
Additional reporting by Vikki Shen
Source: http://www.icis.com/Articles/2011/08/23/9487115/saudis-petro-rabigh-resumes-production-at-po-facility.html
Uncyclopedia:Complaints Department
From Uncyclopedia, the content-free encyclopedia
Take a number
If you feel that something ain't right, take a number and make a fuss about it here. We'll look into and either:
a) laugh ourselves wet,
b) do something about it.
c) ban you for some random amount of time for complaining
You might want to look at Report A Problem in case your moan is more suited to that page.
P.S. Uncyclopedia complaints only. We don't want to hear about your mom's rash.
Complaints
I am the reincarnation of Hemingway
Can you give us a CHANCE please with writing articles before putting "intensive care unit" notes on all the time. My articles are ongoing, it can take hours of work and really messes with my medicated head when ICU things appear. Satire, isn't just taking the mick it's being subtle. In the last 2 days I've done "Hendrix, "Beach Boys" and "Steven Spielberg" All are long stories which I did with respect for the original people because Steve is a director and not just Jewish, Hendrix threw up and died etc... it's cruel to say the obvious. I just ask Unencylopedia to be nicer to the very KIND people who take time out to make the site work. I know ALL of the writers on here and we will ban Unencyclopedia if you aren't nice to us. Love, Ernie (gunshot) See, not funny.
Disturbing
Can someone get rid of the "Horrific rape was really victim's own fault for dressing like a slut" story on the front page? The article isn't funny or satirical one bit.
- I find it extremely funny. But, if you really think it's offensive... nope, I'm leaving it there. - T.L.B.
WotM, UotM, FPrize, AotM, ANotM, PLS, UN:HS, GUN 04:31, Aug 3
Hmmmm... The LedBalloon is typing very sexually suggestively..... —The preceding unsigned comment was added by 137.166.4.130 (talk • contribs)
- I'm totally an IP! Go me! Woo! 131.137.245.207 07:29, 3 August 2009 (UTC)
Parts of the story are okayish, but some of it does cross the "OMG You did not just go there!" mark
- He's got a point - how does an article that gets -3.5 on VFH end up on the front page. Pup t 08:36, 3/08/2009
- Because it's on a part of Uncyclopedia that anybody can edit. Also, while the IP can bitch about how unfunny something is, that's what VFD is for (and I doubt it'd end up deleted there). If you don't find something funny, congratulations, you don't find something funny. A world where everybody had the same sense of humour wouldn't be funny at all. Also, not all pages have to be ha ha funny. Satire can be uncomfortably funny. That page is social commentary of the satirical variety. This comment is, too. I'm all social 'n' shit. In Social Studies class, I studied myself. It was an easy A. Sir Modusoperandi Boinc! 08:43, 3 August 2009 (UTC)
I can't.
Don't you people get it? I can't Be funny and not just stupid! I just can't! I've tried, really really hard, but it just doesn't work. You can't possibly deny me Uncyclopedia just because of my handicap, can you? Please, it's all I have! Don't be so inhuman! --Chay <Contribs UNSOC Also> 22:17, 18 May 2009 (UTC)
Master Pain's Problem
Could somebody please look into the discussion linked to the word "Problem"? It seems that Master Pain has something he would like to address to you, but could not do so here for some reasons. This may not be the right place to raise such a query, but he seems desperate.
For the slackers, here's the discussion up till the time I posted here:
Hi!
I was blocked from Uncyclopedia about 2 months ago, and I was hoping someone here could persuade one of the moderators to let me off early, to contribute some stuff for the holidays. Also, I can't seem to type on my own talk page without being blocked again. My user name there is Master Pain (also known as Betty) (don't ask). If anyone can help, please do so.
Thanks!
- George "Skrooball" Reeves 05:33, 17 December 2006 (UTC)
- Erm, your block reason is "01:40, 26 October 2006 Famine blocked "Master Pain (also known as Betty) (contribs)" with an expiry time of 3 months (Continued edit warring. No, you are not "back and benevolent now". You're still being a dick.)". That looks pretty self-explanatory to me. You could try asking in #uncyclopedia, though - if you're polite they might unblock you. --86.141.170.118 20:34, 17 December 2006 (UTC)
- Actually, my computer can't get the IRC thing -- it's a Windows '97! But, really, I kind of got blocked out of the blue. It's not a good story. George "Skrooball" Reeves 21:16, 17 December 2006 (UTC)
- Huh? Windows 97!? How did you get such Operating System? Microsoft website doesn't have anything of this Windows. --Edmundkh 11:06, 18 December 2006 (UTC)
- Maybe he has a partially-functional version of Windows 98. -- Altiris Exeunt 12:56, 18 December 2006 (UTC)
- Hmm... actually I've seen my cousin's brother having a CD of Windows 97, and I've heard of my classmate who used W97 at that time. But anyway, I'm just curious about W97, as there's no such thing in Microsoft Website --Edmundkh 16:19, 18 December 2006 (UTC)
- Sorry, my mistake -- it's a Windows '98. George "Skrooball" Reeves 19:48, 18 December 2006 (UTC)
- But, please; can anyone help me? I'd really like to start contributing again. George "Skrooball" Reeves 21:39, 19 December 2006 (UTC)
- Anyone, please? George "Skrooball" Reeves 01:26, 20 December 2006 (UTC)
- Didn't you see 86.141.170.118's advice? Go to the village dump (there's a link on the very first template you see on this article) and make your complain there. When I complained to them about an error in the Russian Reversal quote of All Your Base Are Belong To Us, they rectified it within a day. State your points there, and if the admins see it fit, they'll probably unblock you. -- Altiris Exeunt 01:50, 20 December 2006 (UTC)
- But I can't access the Village Dump! I have a Windows '98! If I even try to post on my talk page, I get autoblocked! If you have an account there, then, please: convince them to unblock me! George "Skrooball" Reeves 05:15, 20 December 2006 (UTC)
- Well, surely you must be able to go round the library or something? Or an internet cafe? It can't be that hard to get internet access, surely? --86.141.170.118 11:47, 20 December 2006 (UTC)
- I'm actually rather busy with most of my other time. I just want to get unblocked; I've got a couple of contributions that have been brewing in my head, and I'd like to get them down on the screen. The only reason I'm here is because a user called Famine continually blocks me. After four months of continually being accused of trolling by him with no way for me to respond, I'm starting to think he's holding a grudge against me. Please, help; it's the holidays. If you act now, this could be an early Christmas present. George "Skrooball" Reeves 20:34, 20 December 2006 (UTC)
Once again, I apologise if this isn't the right place to do such a thing, but it would be good if you admins take a look into this problem. -- 165.21.154.11 12:52, 21 December 2006 (UTC)
- I've already raised it with the blocking admin here if you wish to make any further pleading I fuggest you go there,although this is his fourth offencse so you had better make it good.--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 13:31, 21 December 2006 (UTC)
- Master Pain has this to say on Wikipedia:
How? I can't do any more edits -- to any pages. George "Skrooball" Reeves 02:22, 22 December 2006 (UTC)-- 165.21.154.12 07:51, 22 December 2006 (UTC)
- One more thing. Master Pain has been blocked; he cannot plead on Famine's talk page because he is blocked, and I can't do the job for him because Famine's talk page is semi-protected. -- 165.21.154.16 07:55, 22 December 2006 (UTC)
edit Asian male handicap
Hi. I have noted that everything on this article is racist. Editing it won't do much as the title is racist as well. Can someone put this on the VFD? —The preceding unsigned comment was added by The Anonymous (talk • contribs)
- I don't see what's stopping you, but the thing is that I doubt it'll go through VFD. An important thing to note is that sooner or later, you will find something on this site that offends you. (I think it's like rule #3.14159 or somethin'.) —Hinoa KUN (talk) 18:40, 9 December 2006 (UTC)
edit Hey YOU!!
Will admins please see the Russian reversal quote in the article.
Alternatively, you could un-protect the page and let us anonymous users do the job for you. Is this an Uncyclopedia that anyone can edit, or is it an Uncyclopedia that any administrator can edit? -- 165.21.154.108 06:48, 5 December 2006 (UTC)
- I fixed it. Oh, and that page is only protected from new users. Register an account, and 3 magical days later you'll be able to edit it. • Spang • ☃ • talk • 11:34, 5 Dec 2006
No wonder there was that exploding bomb template at the top. Thanks for the clarification, and I am one step closer to being an Uncyclopedian. -- 165.21.154.113 05:36, 6 December 2006 (UTC)
edit A complaint
Hi there. I would like to make a complaint. Thanks. • Spang • ☃ • talk • 02:30, 21 Nov 2006
- Your complaint has been noted, and the brainwashing squad will be on the way shortly. —Hinoa KUN (talk) 06:49, 21 November 2006 (UTC)
edit Anti-Icelandic sentiment
What happened to it? Is the Icelandic propaganda brigade patrolling Uncyclopedia or did it get consumed by the forest fire? Perhaps it wasn't easily funny, but at least it was way better than this article for example. Oh, and it it was deleted, you also might want to delete Anti-Icelandism, which redirected to the article. Don José 00:12, 19 November 2006 (UTC)
- I've restored it to your userspace for you to work on. —rc (t) 00:28, 19 November 2006 (UTC)
edit Napoleon Bonaparte
Um, yeah. I hate to sound like a whiny jerk (even though I am), but for some reason, the so-called featured version of this article that's retrieved when you click the date in the box at the bottom contains a pointless quote about the first image (and not Napoleon) at the top and a really lame joke about the LA Clippers in the middle. I wouldn't complain, except I know that they weren't in the version that was actually featured. Don't know if you can do anything about it, but in any case, complaining makes me feel better. --Kwakerjak 23:24, 6 November 2006 (UTC)
- Just change the featured link bit to point to the right version then. Oh, and duck your head quick, before famine bans you for complaining. • Spang • ☃ • talk • 09:26, 7 November 2006 (UTC)
- I'm scratching my head about why this user would A) complain about something that they could fix, and B) complain here only a little way below the notice which states that they will probably get banned for some random amount of time for complaining. I really think I'll have to get around to writing my article on Rectalcraniitis.
Sir Famine, Gun ♣ Petition » 11/7 23:00
edit Careers
What's wrong? I'll tell you what's wrong: what's wrong is I'M NOT AN ADMINISTRATOR!!! How do I become one? C'mon, I gotta know!
- Becoming a sysop is a long and involved process. It involves three monkeys, half a gallon of hydrogen chloride, the blood of three separate virgins, and a rare gem known as the "Seed of Evil." I'd tell you the process, but I was unconscious during half.
- Seriously, though, if you have a knack for detecting crap and reverting vandalism, you're halfway there. Being helpful also helps. Asking to become an admin, though, isn't going to help your case anytime soon. —
Major Sir Hinoa (Plead) (KUN) 21:22, 7 August 2006 (UTC)
- It's true, asking doesn't help at all. Like, I used to always hang out in the IRC with this other dude, who kept begging to be sysopped, and eventually, they gave in and said, "Fine!" and sysopped me instead. It's a crazy world, yo.--<<
>> 00:30, 8 August 2006 (UTC)
- The application process is three hundred and fifty pages long, then you have to mow BENSON's lawn for a month. Then, if you can survive the killer bee swarm, you have to find the Secret Item of Secretness. Crazyswordsman...With SAVINGS!!!! (T/C) 02:09, 5 October 2006 (UTC)
This is how I became admin:
- ME: Hey, rc, can I be an admin?
- RCMURPHY: Ok.
- SAVETHEMOOSES has become an administrator --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 03:17, 31 October 2006 (UTC)
- To be an admin takes a long time and you'll get a bad reputation once you become one. Really all you have to do is write 100 featured articles, revert vandalism, be nice to n00bs, be nice to the already existing admins, write another 100 featured articles, revert vandalism, revert vandalism(again), crorect tpyos, discuss and finally the admins might take a little bit of their time to nominate you and set up a discussion and there still might be people against you(the vandals?). OK, seriously speaking, all you have to do is act like one and contribute often. Plus you also have to be able to face humiliations and weird confrontations. How do I know all this? I have an account on Wikipedia!--Faizaguo 16:50, 11 June 2008 (UTC)
edit Esteban
I started working on the Esteban article...because, well, it blew. But i'm beginning to think the guy who started it didn't exactly have any direction. I'm working on at least fleshing it out into something that is grammatically correct and a little less numbing. We'll see how that works. Any ideas? Bobby Budnick 19:55, 15 June 2006 (UTC)
- Try adding it to Uncyclopedia:Pee Review? —Sir Major Hinoa [TALK] [KUN] 02:29, 16 June 2006 (UTC)
- Thanks, Hinoa4. Yeah...I'm still a n00b, but I feel like I keep getting myself into articles that don't have much reason to be written in the first place. I wanna start a Mario Lopez article, but can't find the resource or time to do it. I'm ranting. Thanks. Bobby Budnick 03:42, 16 June 2006 (UTC)
edit Gae Tae Buggery
A mate of mine wrote a perfectly good version of "Gae Tae Buggery" to fulfil a link in another article about national anthems. The item was deleted and now she is banned from editing it again.
It's probably been deleted by some sasanach who thought it was a personal attack, or perhaps someone who was invited to a "tea party" under false pretenses and now NEVER drinks tea and doesn't like it in the house. Seán 18:30, 25 May 2006 (UTC)
- Gae Tae Buggery has never been deleted, as it has never been created. Get Tae Buggery, however, had been deleted, and I'm assuming that's what you're attempting to refer to. I've reviewed the article, and while it's a little sparse it doesn't merit instant deletion, so I've restored it with a note to spruce it up a bit.
- Also, I've looked over the history, and no one who has edited that page has ever been banned. If your friend is still having problems, please let me know. --Algorithm 02:00, 26 May 2006 (UTC)
- Also also note that Gae Tae Buggery. - with a period at the end - has been created and deleted. The person who created it was blocked for 24 days in August last year for blanking a completely different article, and said user has edited as recently as May 3. —Hinoa KUN (talk) 02:09, 26 May 2006 (UTC)
edit Tom Cruise
After recently editing the Tom Cruise article & tried to save, I got a "someone is already editing this article" thingie. I did the best I could and saved to try to incorporate both edits - you know, togetherness and all... It then looked like an IP address then reverted the article, with a note "YOU DARE DEFY TEH CAPTAIN, CLOWNMAN!?)". The Tom Cruise article is one example where parts of the article are being deleted by IP addresses (some reverse resolving to California!).
I'd like to request that the Tom Cruise article be placed on a "members only" status for edits. Thanks, and Cookies for all! --Jester 04:27, 15 May 2006 (UTC)
- Seconded. Crazyswordsman...With SAVINGS!!!! (T/C) 02:10, 5 October 2006 (UTC)
edit Search
AAAAAAA! The Search is broken! Does anybody know this?!?? How ever shall I find someone to bear with until it's fixed? :-( --DWIII OUN CUN 05:01, 14 May 2006 (UTC)
edit What gives with deletions?
Over the past two days I've had new articles deleted by User:Flammable and believe that it is a little drastic to simply huff a new article without any warning nor explanation.
The latest deletion of Huffing the Snake brought a notation that it was a one liner and failed QA. What is QA? (It isn't even defined or listed in the database) And this article was certainly more than a single line.
I tend to agree with this template which resided on Flammable's page.
How is it that Flammable gets to decide for all the users and readers what is funny or not? Isn't there at least a time period or a vote for deletion process?
--Jax-arrgh 16:40, 12 May 2006 (UTC)
- Flammable is a bit quick on the trigger finger sometimes. I personally would have NRV'd that article (meaning that you have 7 days to fix it up before the grue eats it). It wasn't that funny, though. By the way, crap along the lines of "u suk dikc lololooolololololololo;l" or similar retardation is insta-huff, as are articles that suck and are four lines or less (this isn't absolute; it's just my guideline) and crappy vanity articles. It's up to admin discretion, usually. Also, I have no idea what QA is, either. —Hinoa KUN (talk) 18:43, 12 May 2006 (UTC)
- Yeah, that was probably more of an NRV sort of situation. I have restored the page and moved it to your user space. (User:Jax-arrgh/Huffing the Snake) It needs some work before it is ready for prime time (i.e. the main namespace). ---Rev. Isra (talk) 23:16, 12 May 2006 (UTC)
Thanks, gang for giving me a bit of an idea of what's happening here and for restoring the article to a private space where I can go and play with myself. I'm not sure yet whether or not I want to make this page funny ... I'm almost certain you guys would only laugh at it. --Jax-arrgh 15:45, 13 May 2006 (UTC)
edit I Have A Problem With A User Preventing Me and Anyone Else From Editing Neon Genesis Evangelion
Here is the deal. A week or so ago, I had edited the article called Neon Genesis Evangelion, and added some funny and weird stuff. I even changed one item because breaking a puppy's neck is cruel. I come back a week or so later, and found it changed back the way it was, before I changed and added to it. I think I know who the user is and they are being unfair. They are trying to claim it was vandalism, but editing and adding things is not vandalism.
Articles are allowed to be edited and have things added, am I right? Can something be done about this user? If this isn't the right place for this complaint, please feel free to move it.
- Well, I had a look at a bit of the article history, and to be honest, I kind of agree with So So... Your edits weren't that great. They might also have broken with the general flow of the article (I'm not sure, as I didn't read it all). It appears that So So has adopted the article and watches over its well-being. Though it may seem a bit unfair to you, I grant him the privilege of editing the article as he sees fit. I wouldn't call your contributions "vandalism", mind... So no need to worry about getting banned. Well... Unless you take into account that you actually complained about something, which (as you can read above) is a bannable offence. ;) --⇔ Sir Mon€¥$ignSTFU F@H|VFP|+S 11:36, 30 April 2006 (UTC)
edit Really noob question
Alright, I went to the article Redheads, and finding that the "original" content was painfully lame, I removed some of it. Then, I wrote a whole bunch of my own stuff and added it. An admin got mad about this, and re-inserted the content that I had removed, not without calling my an Asswad. However, with all the stuff that I wrote, the entire gist of the article had changed. The material that the admin re-inserted (which, in my opinion, is still painfully lame) is now completely irrelevant to the rest of the article, and therefore, even more lame. I really think it needs to go. Or someone should take the time to ingratiate the article into one, consistent piece. Should I bring it up with the admin? If so,....how do you find their talk page?
Thanks. --McAtee08 16:07, 2 April 2006 (UTC)
edit Really comment about Newspeak
It's about your "Crimethink" box. Your translation back from Newspeak to Oldspeak isn't correct - "Minitrue" and "Miniluv" are Newspeak words, so the translation should read:
"The contents of this article have been determined by the Ministry of Truth to be of a highly criminal and fallacious nature, and the Ministry of Love is working to ameliorate any harm done by the falsehoods below.
I was surprised to find it's the English you had trouble with, not the Newspeak. =D
- English is a dirty whore and borrowed those words. Soon Newspeak will be ourspeak, and none will know the difference. --KATIE!! 20:04, 28 April 2006 (UTC)
edit Rewrite needed for Gundam?
So what do we do if an article we think is fine is flagged for a rewrite -- and no one does rewrite it? At what point can I just unflag it? (Pretty sure some IP address tagged it, not even someone with an account, and with no explanation as to what they think was lacking).
And specifically, of course, if anyone actually knows their "Mobile Suit Gundam" and can look at that page... aside from a couple bits, is it really deserving of a rewrite? --Iritscen 15:35, 7 March 2006 (UTC)
- If you still care, the rewrite tag has been replaced with a few maintenance tags. If you love it, save it from the beast by wikifying it. --KATIE!! 20:05, 28 April 2006 (UTC)
edit Flowbiscuit
so i'm an uber noob and had my page deleted. it wasn't making headlines, but it certainly wasn't hurting anyone either. it was Flowbiscuit. i'd link to it. bu ' deleted. so moose dood, what's up?
~flow
- It was put on the VFD page and people decided it should be deleted. Your name is Flowbiscuit, and the article was about yourself which qualifies as a vanity page. Your personal page is User:Flowbiscuit, if you would like the contents of the deleted page to put on your personal page, i'll put it on your talk page and you can do as you wish, however actual articles shouldnt be created about yourself. Dont be discouraged though, most noobs would bitch and blank pages instead of filing a complaint if this happened to them, so I wouldn't worry :) -:44, 26 January 2006 (UTC)
Eh, don't bother explaining it - I just banned him for complaining. I don't understand why people keep complaining here - is there a sign which says "bitch and whine" on this page or something? How the hell do people get the idea that they can complain about stuff? Are we ing EMO or something now?
Sir Famine, Gun ♣ Petition » 01:39, 27 January 2006 (UTC)
edit Sirs and Madames, I wish to file a complaint
I find the Tourette's Syndrome article offensive! But not really. Heh, I bet you thought you were going to have to ban me for a minute there, am I right? Huh? HUH?
My complaint be about the quality of articles on Uncyclopedia. Uncyclopedia is meant to be a parody site. Then a satire site. Then a website. However, most of the website is just Random Humour, which is crap (with the exception of the I burning your dog article). I think unfue exe exe exe exe exe exe exe exces, and then a huff. I mean, how is Kakun not permabanned?
Also, on a side note, could one of the admins make MediaWiki:Tagline a little longer to contain something like "From Uncyclopedia, the content-free encyclopedia" to parody Wikipedia, or at least to make it a bit more funny. -:59, 15 Jan 2006 (UTC)
- I am so on your side on this one but I can never seem to get anyone else to agree with me. I wish we could kill 90% of the random crap. --Sir gwax (talk) 23:16, 15 Jan 2006 (UTC)
- I too agree. But scientific measurements of funniness demonstrate that funny exists only inside the peculiar (often deformed and depraved) craniums of individuals and is therefore scientifically unmeasurable. It appears that many who Administer Whuppings to Bad Articles tend to think that someone might find an incoherent, random article funny and therefore it should not be circumcised at neck level.
All that said, I think the quality is gradually improving.----OEJ 17:27, 17 Jan 2006 (UTC)
Im with OEJ, as more people become aware of the site, more people will get rid of random crap (due to higher voter turnout at such places as VFD) and eventually people will create stuff that is actually funny (by your definitions), however sometimes random is funny. --Brigadier General Sir Zombiebaron 01:50, 20 January 2006 (UTC)
The problem seems to be the quantity of crap posted every day (several hundred articles some days), the majority of which is complete and utter tripe. Most of the short articles get filtered out, but the longer articles, which are just as un-funny still remain. --Hpesoj 23:35, 20 January 2006 (UTC)
- I agree. I know that different poeple have different sense of humour yeah but 80% of all articles on Uncyc are just complete nonsenses and cr*ps. They are random, the parts don't fit together and make sense and they just don't make people laugh. They are more like a sort of "Ok, Ok, random made-up facts-where's the ramdom article link gone?". I've got some work to do.--Faizaguo 17:01, 11 June 2008 (UTC)
edit Cow tipping
I made an effort to improve the article on cow tipping, and I don't think it should have been deleted. In the future, I guess I should be sure to delete any of those blurbs that are on the page about needing rewrites/the future of the page being in peril? --Overlordzoloft 03:29, 4 Dec 2005 (UTC)
- While Cow tipping was worthless to start with, your edits didn't help it much. It was still very, very deletable. Unfunny, too short, no pictures, and unfunny. Did I mention it wasn't funny? If you rewrite it, and make it a solid, multiple paragraph article, with pictures, plot, and humor, then scrap the rewrite sign. Not that that will save it if it still sucks, but for athstetic reasons. However you spell that.
Sir Famine, Gun ♣ Petition » 22:11, 4 Dec 2005 (UTC)
- Fair enough. Thanks for the reply. --Overlordzoloft 00:21, 5 Dec 2005 (UTC)
- That would be "aesthetic". --muffinmanpoo 02:21, 8 February 2006 (UTC)
edit My Keys
Where the did they go? Which one of you stole them? This is getting ing silly... You guys steal the at least twice a week and I am sick of it. Stop stealing my ing keys. --Dave 21:39, 23 Nov 2005 (UTC)
- Lame excuse for not being here. =) -- T. (talk) 21:42, 23 Nov 2005 (UTC)
- Um, are you sure you didn't lock them in template:cars again? --Carlb 01:46, 23 Dec 2005 (UTC)
edit Brazil
This page has too many jokes about the language. If one doesn't understand Portuguese they're not funny at all. Acid Ammo 16:07, 18 Nov 2005 (UTC)
Actually, I thought about that, but since the rest of the article is good, I don't think it should be deleted. Acid Ammo 16:26, 18 Nov 2005 (UTC)
- Be Italic and/or bring up on Talk:Brazil--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 17:12, 18 Nov 2005 (UTC)
edit Mighty
Dear tard:
Where did my Marvin Gay page go? It should have been filed for QVfD. --Borbetomagus 16:00, 9 Nov 2005 (UTC)
- Please provide a link to where it should have been so I can look at the deleted history. By-the-way, it's Sir tard to you. --
IMBJR
16:45, 9 Nov 2005 (UTC)
edit I Think You Ought to Know I'm Feeling Very Depressed
Dear Sir: I wish to complain about the fact that my rate of article production has dropped precipitously in the past 3 months. I have been quite discouraged lately by the thousands of incomprehensible and arcane one-to-ten-liner unUncyclopedian submits that have vastly outnumbered my meagre and pathetic attempts at humor &/or humour consisting of fairly lengthy pieces which are jam-packed with content-free content. Permission to shoot the n00bs, sir? --DWIII 03:36, 2 Oct 2005 (UTC) CUN
- Mine too, but stiff upper lip and all that. Now I have an admin stick, I can do non-creative things here to help. Perhaps you should nominate yourself for adminship at the Village Dump? --
IMBJR
10:03, 2 Oct 2005 (UTC)
edit No more complaints
I hate complaints. I say we ban anyone who makes a complaint. --Savethemooses 22:57, 26 Aug 2005 (UTC)
- Think of this page as a trap. If they raise their heads, we got 'em. --
IMBJR
23:12, 26 Aug 2005 (UTC)
- Indeed. Hey, how did you make that Hellfire Admin thingy? --Savethemooses 21:25, 27 Aug 2005 (UTC)
- It's simply a template (User:IMBJR/sig) which is referenced by my user preferences. Instead of using the standard sig, I set it for raw text and with that option you can pretty much use any Wiki code you like. I've experimented with using images in it too. --
IMBJR
10:05, 28 Aug 2005 (UTC)
- Are YOU not making a complaint?--Faizaguo 17:06, 11 June 2008 (UTC)
edit Main Page
My Main page is turning kinda red and getting swollen with little funny bumps that turn yellow. I've tried punching it, and scratching it, and yelling at it, but it doesn't help. Should I blank a bunch of pages to try and fix it?--Flammable CUN 21:30, 11 Aug 2005 (UTC)
That's really weird. My rash is turning into an Uncyclopedia page. --Marcos_Malo S7fc BOotS | Talk 22:53, 13 Aug 2005 (UTC)
- Damn it Marcos, can't you read? It says explicitly at the top of the page not to complain about your rash. Now do you have something relevant to contribute, or are you just wasting our time and disrespecting Flammable with your antics? --Spintherism 03:30, 14 Aug 2005 (UTC)
Yelp, what happened to the 'Do you care...'-section on the main page?? Yiks, hope this hasn't got anything to do with my addition to the 'Bad days to be born on'-page. 84.230.253.72 14:46, 8 March 2006 (UTC)
- I think the two sections (Did You Know and Did You Care) have been combined under the Did You Know title. --—rc (t) 15:10, 8 March 2006 (UTC)
I think that the "random page" button should always go to the same page. I reccomend david hasselhoff.
- I fully support trying this idea out for a day or two to see just how loudly people scream, and for the sheer amusement I could have replying "works for me" to the various "OMG random doesnt work somone fix it pls!!!" posts.
Sir Famine, Gun ♣ Petition » 20:21, 11 May 2006 (UTC)
edit The uncyclopedia page
I still do not understand why the uncyclopedia page is a redirect page. I propose that the uncyclopedia page be made a real page that includes a section called "complaints". This page should be turned into a redirect page that sends people somewhere amusing like this. --JWSchmidt 23:21, 11 Aug 2005 (UTC)
- Uncyclopedia:About now links here. --IMBJR 09:20, 12 Aug 2005 (UTC)
edit Template:CRLH
I swear to god, that ladder is cursed. Revenge of the bad wiki formatting! (Use IE to see it) --Nintendorulez 19:57, 2 Nov 2005 (UTC)
edit This Page
Im annoyed because youre aint doesnt have an apostrophe. I cant believe this hasnt been fixed.
Why can't we all just get along and burn whatever's wrong with the page with a heated power ranger?
edit Woof Page
Why did you delete the 'Woof' article? It was the cutest article in the whole UnCyclopedia... =(
- It got whacked because it was unoriginal, uninspired, and not very funny. While it might have been cute, we're not ing "cute-o-pedia, the encyclopedia for people to paste cute pictures into". I think you may be looking for this. If you really, really need to see woof, you can go here and gaze until your heart is content, because Rcmurphy is a softy and moved it to user-space.
Sir Famine, Gun ♣ Petition » 22:25, 12 March 2006 (UTC)
- Woof was an awesome article and I wish the elitist admins would acknowledge that. I especially lol'd at the 'Joe DiMaggio?' --User:Nintendorulez 15:14, 28 December 2006 (UTC)
edit General disclaimer not
Not general enough! Please generalize. Sj 03:17, 14 Dec 2005 (UTC)
- You are welcome to suggest amendments at the Village Dump. --
IMBJR
17:36, 14 Dec 2005 (UTC)
edit Help!
Okay, so when I log in, not 10 seconds later, I'm logged out again. I've searched and searched to see if I was banned, but as far as I can tell, I wasn't and there's no reason I should be. Username: Filmcom. Help please?
- This is the complaints department. Not technical support. Your message above will therefore be deleted in the very near future. If you'd like to post a complaint, it should look something like this:
- I don't know what the hell your problem is - I have a cookie manager on my computer which doesn't allow new cookies, yet when I visit your site which requires a cookie and don't allow it to be set, I get logged out after every page. I'm lodging a formal complaint with wikicities because of your snooty behavior. --).
- Now that's how you frame a proper complaint for this page. None of the "waaah I need help" crap like you posted above. If you're complaining, you need to do it right. ~Sir Famine, Vandal♣er 01:28, 11 April 2006 (UTC)
edit Complaint
Uncyclopedia isn't funny enough. Get it fixed, s, because Uncyclopedia's current state of being unfunny isn't very funny at all. I've seen funnier vandalism on Wikipedia!
- If you don't like it, don't read it! Duuuuuuh. ~ 16:18, 1 June 2006 (UTC)
- Nobody Cares. Crazyswordsman...With SAVINGS!!!! (T/C) 02:08, 5 October 2006 (UTC)
edit WTF!
I had put up an article about John Coffee and it was deleted. I can't understand why, since I wasn't given an explanation.
- If you go to the red link and click the huffed link, you'll see who and why and can ask them (or just see the note at the top of the Main_Page). Uncyclopedia has far too many deletions for personal explanations. --Splaka 05:14, 16 July 2006 (UTC)
edit File:Drunk Homosexual Hanzek.jpg
While this may even be true, the material is libellous.
- I do not see what the problem is, as that image does not exist. Also, does anybody edit this page anymore? Ж Kalir, Crazy Indie Gamer (missile ponies for everyone!) 17:08, 3 March 2008 (UTC)
- Chiefly because nobody cares about this page, nobody edits it. --Nobody 16:45, 4 March 2008 (UTC)
edit Complaint
Capercorn hearby registers a complaint. So there it is. --Capercorn FLAME! what? UNATO OWS 02:46, 12 July 2007 (UTC)
- Your complaint has been noted. A team has been dispatched to your home for legal purposes. Ж Kalir, Crazy Indie Gamer (missile ponies for everyone!) 17:08, 3 March 2008 (UTC)
edit Another Compalint
The grenade attached to my number will not explode! This is where I get my troop grenades! We're gonna have to use the C4 and the Claymore mines then... --Lt. High Gen. Grue The Few The Proud, The Marines 02:57, 12 July 2007 (UTC)
- So that's you that's been stealing them all? You best run, boy, because I got a krybiard with your name on it! Ж Kalir, Crazy Indie Gamer (missile ponies for everyone!) 17:08, 3 March 2008 (UTC)
I don't like non-truthful information.
- Yeah, well Vincent Price doesn't either! And y'know what? HE'S DEAD! Strangled himself to death, he did! And then rose as a lich to bring fear and misery to the commoners! Or maybe that's Strom Thurmond. It's easy to get the two mixed up. Ж Kalir, Crazy Indie Gamer (missile ponies for everyone!) 17:08, 3 March 2008 (UTC)
edit Complaint - Correct articles
People are actually posting true articles that contain no misinformation, which seems to be opposite to the spirit of UnNews. The page in question is [[1]]. The posters in question are Hyperbole, Dr. Skullthumper and Dexter. "Experienced editor" be damned. My corrections may not be the best, but are certainly funnier than the truth. This isn't a personal blog, is it?
edit My diatribe
edit Still waters
I was about to create a forum on this but I didn't really want a public debate, so I thought I'd come in here instead, as nobody reads these.
One of the reasons I like working in Uncyclopedia is that it's open source to the nth degree, which means that as soon as I put an article up on Uncyclopedia then somebody comes along and adds in the word "gay" 50 times and thinks it's funny. But the joy of this is I can just go straight back in there and revert it. And then do it again. And then do it again.
The point on all this though is that being as open source as it is anybody can come in here and do whatever the hell they want. We get a lot of opinion in here, but we get a lot of counter opinion, and as such it actually shows a form of free speech in action.
The same unfortunately can't be said for Still Waters and Chonarion. I have no issue with either of these blogs, except one. They are blogs. They're under the complete control of one individual. Okay, two, but you know what I mean.
I can understand keeping these around as an historical link for Uncyclopedia - it's human nature to wonder "Where did this all come from? Why are we here? I wonder what's for lunch?" But for these to be displayed on every single page seems to be a pure vanity thing, and considering we have an anti-vanity policy and an anti-spam policy, this seems to be a little... ummm... vain spammish.
Although I'd love to keep these around, can we limit them to the main page and a few other selected places around? Pup t 23:56, 22/07/2009
edit This page
Why isn't there a + thingy at the top of the page next to the edit button? Pup t 23:56, 22/07/2009
- I don't know. Sir Modusoperandi Boinc! 02:12, 23 July 2009 (UTC)
edit Creating articles
Why does Beginner's_Guide/Creating_articles only give the options of the main space and undictionary. Undictionary seems to be a little passé now, and we have UnNews, UnTweet, UnNews audio, and a bunch of other UnThings that I can't think of right at the moment. Given that this is supposed to guide people into the
bowels depths of Uncyclopedia, shouldn't we have a few more options in here? Pup t 23:56, 22/07/2009
- Have you considered...oh, I don't know...adding them there? Sir Modusoperandi Boinc! 00:42, 23 July 2009 (UTC)
- Yes, and I was on the verge of doing it myself, but, as you can see from above, I couldn't remember all of them. That, and as this is the intro for n00bs into Uncyclopedia I thought it better to be done by an admin or, failing that, by someone who has been here longer and has had to deal with n00bs along the way. Pup t 01:25, 23/07/2009
- I'll do it later. MegaPleb • Dexter111344 • Complain here 01:26, 23 July 2009 (UTC)
- Thanks - did either of you manage to read the two other "complaints" I had above this one? Pup t 01:51, 23/07/2009
- Well, I think I can add one of those whatchamacallits, but the Stillwaters and Chron issue is out of my hands. They are both A-holes. MegaPleb • Dexter111344 • Complain here 01:54, 23 July 2009 (UTC)
- And I can't add the other namespaces to edit as the page is locked... Dammit. Oh well, I dealt with one problem. MegaPleb • Dexter111344 • Complain here 02:02, 23 July 2009 (UTC)
- I did it. I does! Sir Modusoperandi Boinc! 02:13, 23 July 2009 (UTC)
edit Naked people
Why are they in here? —The preceding unsigned comment was added by Fllour (talk • contribs)
- You could at least have the decency to point out who these naked people are so that we can follow and photograph them. Jerk. Sir Modusoperandi Boinc! 22:28, May 1, 2010 (UTC)
I mean in general. —The preceding unsigned comment was added by Fllour (talk • contribs)
- Well, you see, the thing you have to remember is that Pizza Pops go right through you. Sir Modusoperandi Boinc! 04:22, May 2, 2010 (UTC)
edit Removal Request
Hey,
I'm trying to remove a piece of personally identifying information from an archive in order to stop it showing up in search engine results. This is purely for privacy reasons. The changes I am trying to make can be found in this diff.
I have tried to make the changes twice, but the change is reverted by the user Lollipop. Could an admin please allow the changes to the page be made, or explain to me why the changes are being reverted.
Thanks
- Hi. As you mentioned my name, it seems like i'll be the one to explain this to you. If something is private, and you don't want it shown in a search engine, there are a few things you can do. One thing is to add __NOINDEX__ to the page with the approval of admins. Other than that, please don't remove content from archives without permission. Cheers. -- 16 October 2011, at 22:01
- Thank you for explaining - I shall contact an admin :)
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Complaints_Department?oldid=5522287
How to handle cookies inside apollo-server-lambda
I am migrating from apollo-server to the serverless version. Is there a way I can access the response object or another way to set cookies?
context: ({ event, context }) => ({
    headers: event.headers,
    functionName: context.functionName,
    event,
    context,
}),
I was expecting in the context to have access to the res object like it was in the apollo-server.
I couldn't find a way to do that using apollo-server-lambda, so what I did was use apollo-server-express and serverless-http in conjunction. The code below uses import/export because I am using TypeScript.
serverless-http accepts a variety of express-like frameworks.
import express from 'express'; // <-- IMPORTANT
import serverlessHttp from 'serverless-http'; // <-- IMPORTANT
import { ApolloServer } from 'apollo-server-express'; // <-- IMPORTANT

import typeDefs from './typeDef';
import resolvers from './resolvers';

export const server = new ApolloServer({
    typeDefs,
    resolvers,
    context: async ({ req, res }) => {
        /**
         * you can do anything here like check if req has a session,
         * check if the session is valid, etc...
         */
        return {
            // things that will be available to the resolvers
            req,
            res,
        };
    },
});

const app = express(); // <-- IMPORTANT
server.applyMiddleware({ app }); // <-- IMPORTANT

// IMPORTANT
// by the way, you can name the handler whatever you want
export const graphqlHandler = serverlessHttp(app, {
    /**
     * **** IMPORTANT ****
     * this request() function is important because
     * it adds the lambda's event and context object
     * into express's req object so you can access them
     * inside the resolvers or routes if you're not using apollo
     */
    request(req, event, context) {
        req.event = event;
        req.context = context;
    },
});
Now for instance you can use res.cookie() inside the resolver
import uuidv4 from 'uuid/v4';

export default async (parent, args, context) => {
    // ... function code

    const sessionID = uuidv4();

    // an example of setting the cookie
    context.res.cookie('session', sessionID, {
        httpOnly: true,
        secure: true,
        path: '/',
        maxAge: 1000 * 60 * 60 * 24 * 7,
    });
}
You can use the apollo-server-plugin-http-headers package.
Usage is as simple as this from within your resolvers:
context.setCookies.push({
    name: "cookieName",
    value: "cookieContent",
    options: {
        domain: "example.com",
        expires: new Date("2021-01-01T00:00:00"),
        httpOnly: true,
        maxAge: 3600,
        path: "/",
        sameSite: true,
        secure: true
    }
});
You need a way to set the response headers in your resolvers.
What you can do is to set a value to the context object in your resolver.
const resolver = (parent, args, { context }) => {
    context.addHeaders = [{ key: 'customheader', value: 'headervalue' }]
}
You can catch the context in the willSendResponse event of the server lifecycle by creating an Apollo Server plugin. You can then append the headers from the addHeaders property to the GraphQLResponse object.
const customHeadersPlugin = {
    requestDidStart(requestContext) {
        return {
            willSendResponse(requestContext) {
                const { context: { addHeaders = [] } } = requestContext.context
                addHeaders.forEach(({ key, value }) => {
                    requestContext.response.http.headers.append(key, value)
                })
                return requestContext
            }
        }
    }
}
You need to load the plugin in Apollo Server.
const server = new ApolloServer({
    typeDefs,
    resolvers,
    plugins: [customHeadersPlugin],
    context: ({ context }) => ({ context })
})
Now you've got a way to modify the response headers from your resolvers. To set a cookie you can either build the Set-Cookie header value manually from a cookie string or use a cookie library.
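As an illustration of the manual route, here is a small helper (hypothetical, not part of Apollo or the plugin above) that serializes a name/value pair and a few common options into a Set-Cookie header value; a real library such as the "cookie" npm package covers encoding and validation more carefully:

```javascript
// Hypothetical helper: serialize a cookie into a Set-Cookie header value.
// Only a handful of common attributes are handled here.
function serializeCookie(name, value, options = {}) {
  const parts = [`${name}=${encodeURIComponent(value)}`];
  if (options.maxAge !== undefined) parts.push(`Max-Age=${Math.floor(options.maxAge)}`);
  if (options.path) parts.push(`Path=${options.path}`);
  if (options.expires) parts.push(`Expires=${options.expires.toUTCString()}`);
  if (options.httpOnly) parts.push('HttpOnly');
  if (options.secure) parts.push('Secure');
  return parts.join('; ');
}

// Inside a resolver you could then do something like:
// context.addHeaders = [{
//   key: 'Set-Cookie',
//   value: serializeCookie('session', sessionID, { httpOnly: true, path: '/' }),
// }];
console.log(serializeCookie('session', 'abc 123', { httpOnly: true, path: '/' }));
// -> session=abc%20123; Path=/; HttpOnly
```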
Thanks to Trevor Scheer of the Apollo GraphQL team for pointing me in the right direction when I needed to implement this myself.
- I had a lot of problems when trying to make both servers, Express and Lambda, work together with cookies using this method... the best solution was to remove cookies and sessions and use JWT
- this is great, ty
http://thetopsites.net/article/59321494.shtml
|
Play with a random code
Before I tell you what dp is, I would like you to go through an example code.
Some random code:
def F(n):
    if n == 0:
        return 0
    else:
        result = 0
        for i in range(n):
            result += F(i)
        return result+n

print F(int(raw_input()))
If you observe, the function F(n) is calculating (2**n)-1 with a time complexity of O(2^n). It can be done in O(log2(n)) also, but that's not the concern here.
The above code is an example of Exhaustive Search where we are exploring every possible branch that can be formed from a set of whole numbers up to n (which is passed as the argument and for loop is used for that). Let’s construct the decision tree (recursive call stack) for the above function.
Let a call be made with F(3); now three branches will be formed, one for each number in the set S (S is the set of whole numbers up to n). I have taken n = 3 because it is easy to draw the diagram for it. You can try with other larger numbers and observe the recursion call stack.
        3
      / | \
     0  1  2     ----> the leftmost node returns 0, since n == 0 is the base case
        |  | \
        0  0  1
              |
              0  ----> returns 0
So here you have explored every possibility branches. If you try to write the recursive equation for the above problem then:
T(n) = 1; n is 0 = T(n-1) + T(n-2) + T(n-3) + ... + T(1); otherwise
Here,
T(n-1) = T(n-2) + T(n-3) + ... T(1). So, T(n-1) + T(n-2) + T(n-3) + ... + T(1) = T(n-1) + T(n-1)
So, the Recursive equation becomes:
T(n) = 1; n is 0 = 2*T(n-1); otherwise
Now you can easily solve this recurrence relation (or use can use Masters theorem for the fast solution). You will get the time complexity as O(2^n).
Solving the recurrence relation:
T(n) = 2T(n-1) = 2(2T(n-2)) = 4T(n-2) = 4(2T(n-3)) = 8T(n-3) = ... = 2^k T(n-k), for some integer `k` ----> equation 1
Now we are given the base case where
n is
0, so let,
n-k = 0 , i.e. k = n;
Put
k = n in
equation 1,
T(n) = 2^n * T(n-n) = 2^n * T(0) = 2^n * 1; // as T(0) is 1 = 2^n
So, T.C = O(2^n)
If you print the recursion call stack on the console you will get something like this:
F(4)
F(0)
F(1)
F(0)
F(2)
F(0)
F(1)
F(0)
F(3)
F(0)
F(1)
F(0)
F(2)
F(0)
F(1)
F(0)
15
Now you can observe how the function calls are being made. You can see the repetition of the computation of
F(1) and
F(2). Also, if you observe the recursion tree formed above (each node in the tree is a subproblem of the main problem), you will see that the nodes are repeating (i.e. the subproblems are repeating).
What if we use a memory in our function to store the already computed value and whenever the sub-problems are occurring again (i.e. repeating function calls or repeating nodes) we will use the pre-computed value (this saves time for computing the sub-problems again and again). The approach is also known as Dynamic Programming.
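As a minimal sketch of this caching idea (using a dict as the memory, and keeping the same F from the example above):

```python
# Memoized version of F: each subproblem is computed at most once,
# then served from the dict on every repeated call.
# F(n) = 2**n - 1, as derived above.
def F(n, memo=None):
    if memo is None:
        memo = {}
    if n == 0:
        return 0
    if n not in memo:
        # sum of all smaller subproblems, each computed only once
        memo[n] = sum(F(i, memo) for i in range(n)) + n
    return memo[n]

print(F(10))  # -> 1023, i.e. 2**10 - 1
```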
I think the main problem comes in doing the memoization part (i.e. storing the results of expensive function calls and returning the cached result). What I do is look at the decision tree or the recursion call stack (either printed on the console or drawn in a notebook) and then figure out where the repeating nodes (or subproblems) occur. If they occur on the same level in the decision tree, then I can just use a simple variable to store the result, and that variable can hold the result in that call stack. But if the repeating nodes occur on different levels, then we might need to use a storage entity like a
list and pass it on each function call so that we can use the computed values as and when required.
Now you can scroll up once again and look at the recursion call stack (or decision tree); by now you must have figured out that we would need a
list as the storage entity to store the pre-computed results (as the overlapping sub-problems occur on different levels).
Below is the code which uses dynamic programming approach to optimize the above function.
def F(n, dp):
    # base case
    if n == 0:
        dp.append(0)  # dp[0] = 0, as F(0) is 0
        return 0
    result = 0
    for i in xrange(n):
        # check if dp already has that index value computed or not
        if i < len(dp):
            result += dp[i]
        else:
            result += F(i, dp)
    dp.append(result+n)
    return result+n

if __name__ == '__main__':
    n = int(raw_input())
    import time
    start_time = time.time()
    print F(n, [])
    print '\n', (time.time()-start_time)
So, that’s how I converted a time consuming recursive program into optimized solution. You should also try to write your solution using dynamic programming approach after understanding the decision tree formed from the recursive solution.
The time complexity of the above function seems to be O(n).
Here I observe that, although the asymptotic time seems to be O(n), the computation time does not increase linearly with the input size.
Here are some results for program written in python.
The input vs time graph looks like this:
http://binomial.me/tutorials/play-with-a-random-code/
|
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Project Help and Ideas » Iphone / RC car help!!!!
I am trying the iPhone to control the RC car project.
I installed the Apache webserver and loaded files car.php and vector.php to /htdocs. I've also wired the R/C and programmed the microcontroller.
I have 2 problems:
1) when I access 98.210.217.50 (my ip)/car.php... Safari says it cannot open the page
2) How do you make a 'named pipe' on Windows?
Can anyone provide step by step on what you did to complete the project and the code used?
Ok, I am able to bring up car.php on my iphone (I was typing the wrong ip)
Now, I just need to know how to create a 'named pipe' in windows.
Any ideas?
I think I have figured it out. I will create a python socket server to make PHP and Python talk. I'm thinking something like this.
import socket

mySocket = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
mySocket.bind ( ( '', 2727 ) )
mySocket.listen ( 5 )

while True:
    channel, details = mySocket.accept()
    print 'We have opened a connection with', details
    txt = channel.recv ( 100 )
    print txt
    channel.send ( "This is a response test - attempt 1\0" )
    channel.close()
Any ideas, or suggestions?
Hi Tripp,
It looks like you have exactly the right idea on the python side. I think you want to move the
mySocket.listen ( 5 )
call inside the while loop. That way you will process one connection, then wait for the next one to come through from the php script.
Humberto
Actually, you only need to call mySocket.listen(5) once. It's part of the server config and basically says the server can hold up to 5 clients in its queue.
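A self-contained sketch of that point (port and message are mine, not from the thread): listen() is called once during setup, and a single accept loop then serves several clients in turn.

```python
# Server calls listen() once, then accept()s connections in a loop.
# A client connects twice to show one listen() serving many clients.
import socket
import threading

def serve(server_sock, n_clients):
    for _ in range(n_clients):
        channel, details = server_sock.accept()
        txt = channel.recv(100)
        channel.send(b"echo: " + txt)
        channel.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
server.listen(5)                # called once, outside the loop
port = server.getsockname()[1]

t = threading.Thread(target=serve, args=(server, 2))
t.start()

replies = []
for msg in (b"hello", b"world"):
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(('127.0.0.1', port))
    client.send(msg)
    replies.append(client.recv(100))
    client.close()

t.join()
server.close()
print(replies)  # [b'echo: hello', b'echo: world']
```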
Please log in to post a reply.
http://www.nerdkits.com/forum/thread/1001/
|
Tor Hildrum wrote:
> I have this problem which I thought would be trivial, but I can't
> seem to figure out a decent way to do it.
>
> Say I have the following file:
> 10
> -100
<snip>
> -108
> --1080
<snip>
> 12
<snip>
> 20
>
> In lack of a better explanation, here is how it works:
> A level in the tree follows the following:
> x * 10^level
>
> x * 10^1 belongs to level 1 in the three
> x * 10^2 belongs to level 2 in the three.
> etc.

Here's a different way to look at it: the level in the tree is
determined by the length of the string representation of the node.
2 long -> level 1. 3 long -> level 2. etc.

> I decided to make this pretty straightforward so I wrote a Node class
> and a Tree class.

Are you sure you need a Tree class? The Node class itself may have all
the facilities needed to manage an entire tree. The tree is then
represented by a 'root' node. You'd then have:

- root
  - 10
    - 103
    - 105
  - 12

etc.

> A Node has a key which is an integer, as well as some additional
> information that isn't relevant to the structure. It also has a link
> to it's sibling, which is the next node on the same level. And a link
> to it's first child.

A different way to implement this is for each node to have a (sorted)
list of children and potentially a link to its parent. The story then
looks like this:

- root with the following children
  - 10 (with parent = root) with the following children:
    - 100
    - 108
  - 12 (with parent = root)

etc.

You then do not insert siblings, you add children (not tested, but the
intention is what counts):

class Node(object):
    def __init__(self, name):
        self.Name = name
        self.Parent = None
        self.Children = []

    def addChild(self, newname):
        """Tries to add the node as a new child node.
        Returns True if successful, False otherwise."""
        # check if the node is a (sub)child
        if newname[:len(self.Name)] <> self.Name:
            return False
        # check if it is a direct descendant
        if len(newname) == len(self.Name) + 1:
            newnode = Node(newname)
            newnode.Parent = self
            self.Children.append(newnode)
            self.Children.sort()
            return True
        else:
            # not a direct descendant -> add to one of the children
            for child in self.Children:
                if child.addChild(newname):
                    return True
            # if we arrive here, it means that there's a missing
            # level in the hierarchy -> add it
            self.addChild(newname[:len(newname)-1])
            return self.addChild(newname)

    def show(self, indentation=0):
        print ' ' * indentation, '-', self.Name
        for child in self.Children:
            child.show(indentation + 2)

    def __cmp__(self, othernode):
        """Get sort() to work properly."""
        return cmp(self.Name, othernode.Name)

    def hasChildren(self):
        return len(self.Children) > 0

    def hasSiblings(self):
        return (self.Parent <> None) and (len(self.Parent.Children) > 1)

root = Node('')
root.addChild('10')
root.addChild('12')
root.addChild('0')
root.addChild('20')
root.addChild('108')
root.addChild('5')
root.addChild('678')
root.show()

This implementation will not handle the dot-style leaves properly,
you'll need some extra logic for that. It will however 'fill in the
blanks', so you can add node '678' without adding nodes '6' and '67'
first, and auto-sort the nodes.

> How do I know if I have a sibling or a child?
> Simple, I just check the length:
> ---------------------------------------------
> if( len(str(node1[key])) == len(str(node2[key])) ):
> ---------------------------------------------
>
> If the length, amount of integers, is the same, they are siblings.

With the alternative representation presented, it's more comfortable:

- has child: len(self.Children) > 0
- has sibling: (self.Parent <> None) and (len(self.Parent.Children) > 1)

> How do I determine that 2080 is not a child of 10? Or how do I
> determine that 536798 is not a child of 536780? And how do I
> determine that it is a child?

See my code: just manipulate them as strings and it's suddenly very
easy. Same length and the first (length - 1) characters are the same
-> siblings. Different length: take the shortest node name; if the
other node name starts with that string, it's a child, otherwise
they're unrelated.

> I can't seem to rack my brains around a solution for this. Maybe it's
> my tree-structure that is making this more complex than it should be?

Hierarchies are easier if you look at them as families: it's easier to
ask a parent how many children it has, than it is to ask one of the
siblings if there is any sibling younger than it, then ask that younger
sibling if it has any younger siblings, etc.

Yours,
Andrei
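The string tests Andrei describes can be sketched as a small standalone helper (the function name and labels are mine, not from the thread):

```python
# Hypothetical helper illustrating the string-based relation tests:
# same length with a common prefix of length-1 -> siblings;
# different lengths where one name starts with the other -> descendant.
def relation(a, b):
    """Classify two node names: 'siblings', 'child', or 'unrelated'."""
    if len(a) == len(b):
        # same length: siblings iff all but the last character match
        return 'siblings' if a[:-1] == b[:-1] else 'unrelated'
    short, long_ = sorted((a, b), key=len)
    # different length: a descendant's name starts with its ancestor's name
    return 'child' if long_.startswith(short) else 'unrelated'

print(relation('10', '12'))          # siblings
print(relation('10', '108'))         # child
print(relation('10', '2080'))        # unrelated
print(relation('536780', '536798'))  # unrelated
```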
https://mail.python.org/pipermail/tutor/2006-December/051450.html
|
Has anyone tried to mock whole classes (instead of mocking only the
objects)?
Classes are, like everything else in Ruby, just objects. This allows
us to mock them just like we would mock any other object. I have been
working on a Rails application (that will be shared with you as soon
as I get it translated, I promise) in which I needed to do exactly
that.
The class I wanted to mock is responsible for communicating to my
back-end database and fetching the appropriate objects (instances of
itself). The object-finding functionality is implemented as class
level methods. If I need the N latest headlines from a reporter named
'johnson', I just need to call Headline.latest(N, "johnson").
If I need to test a class that needs to do a Headline.latest call as
part of its job, I don’t want to populate the database with real data
(and slow down my tests as I wait for a connection) because I mostly
trust that Headline works. It has its own tests to assure that. So I
mock the Headline class to make sure my class under test makes the
correct calls.
I have come up with a simple way to that in Ruby and I am mostly
satisfied with the results, but I would like to get some feedback from
the community. The source is here:
class CacheReporter < MotiroReporter
  def initialize(headlines_source=Headline)
    @headlines_source = headlines_source
  end

  def latest_headlines
    return @headlines_source.latest(3, 'mail_list')
  end
end

class CacheReporterTest < Test::Unit::TestCase
  def test_reads_from_database
    FlexMock.use do |mock_headline_class|
      mock_headline_class.should_receive(:latest).
                          with(3, 'mail_list').
                          once

      reporter = CacheReporter.new(mock_headline_class)
      reporter.latest_headlines
    end
  end
end
The main change here can be seen on the CacheReporter constructor. If
I were not testing, it wouldn’t even be written, I would just use the
Headline class wherever I wanted. But instead of directly using the
Headline class inside its methods, it receives it on the constructor.
This is what allows us to mock the behavior.
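Since classes are just objects, even a hand-rolled fake works for this kind of constructor injection; FlexMock mostly automates the bookkeeping. A sketch (the MotiroReporter superclass is omitted, and FakeHeadlineClass is my name, not from the post):

```ruby
# The class under test, as in the post (superclass omitted for brevity).
# Note that Headline is never evaluated when a source is passed in.
class CacheReporter
  def initialize(headlines_source = Headline)
    @headlines_source = headlines_source
  end

  def latest_headlines
    @headlines_source.latest(3, 'mail_list')
  end
end

# A hand-rolled stand-in: any object responding to .latest can be injected.
class FakeHeadlineClass
  attr_reader :calls

  def initialize
    @calls = []
  end

  def latest(count, reporter)
    @calls << [count, reporter]  # record the call for later inspection
    []                           # return a canned (empty) result
  end
end

fake = FakeHeadlineClass.new
CacheReporter.new(fake).latest_headlines
p fake.calls  # [[3, "mail_list"]]
```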
Has anyone done anything similar? Can the code be made simpler?
Cheers,
Thiago A.
https://www.ruby-forum.com/t/mocking-whole-classes/56750
|
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
Hello everybody,
I'm a real noob so I would really appreciate it if someone would tell me what's wrong with my code. First I just want to use ASCII code to convert an existing video (not using the webcam) to text. So I copy-pasted the script, changing it a little after I had an error with the font. Anyway, now I don't have an error message but the sketch is just a black screen....
Here is the code :
import processing.video.*;

Movie video;
boolean cheatScreen;

String letterOrder =
  " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu" +
  "nT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";
char[] letters;

float[] bright;

PFont font;
float fontSize = 1.5;

void setup() {
  size(1440, 1080);

  video = new Movie(this, "Torrent.mts");
  video.play();

  font = loadFont("ProcessingSansPro-Regular-48.vlw");

  letters = new char[256];
  for (int i = 0; i < 256; i++) {
    int index = int(map(i, 0, 256, 0, letterOrder.length()));
    letters[i] = letterOrder.charAt(index);
  }
}

// must be spelled movieEvent (lowercase m): Processing looks this callback
// up by name, so a MovieEvent method is never called, no frame is ever
// read, and the sketch stays black
void movieEvent(Movie m) {
  m.read();
}

void draw() {
  // video.width/height are 0 until the first frame has been read
  if (video.width == 0 || video.height == 0) return;
  if (bright == null) {
    bright = new float[video.width * video.height];
    for (int i = 0; i < bright.length; i++) {
      bright[i] = 128;
    }
  }

  background(0);
  pushMatrix();

  float hgap = width / float(video.width);
  float vgap = height / float(video.height);
  scale(max(hgap, vgap) * fontSize);
  textFont(font, fontSize);

  int index = 0;
  video.loadPixels();
  for (int y = 1; y < video.height; y++) {
    translate(0, 1.0 / fontSize);
    pushMatrix();
    for (int x = 0; x < video.width; x++) {
      int pixelColor = video.pixels[index];

      int r = (pixelColor >> 16) & 0xff;
      int g = (pixelColor >> 8) & 0xff;
      int b = pixelColor & 0xff;

      int pixelBright = max(r, g, b);

      float diff = pixelBright - bright[index];
      bright[index] += diff * 0.1;

      fill(pixelColor);
      int num = int(bright[index]);
      text(letters[num], 0, 0);
      index++;
      translate(1.0 / fontSize, 0);
    }
    popMatrix();
  }
  popMatrix();

  if (cheatScreen) {
    set(0, height - video.height, video);
  }
}

void keyPressed() {
  switch (key) {
    case 'g': saveFrame(); break;
    case 'c': cheatScreen = !cheatScreen; break;
    case 'f': fontSize *= 1.1; break;
    case 'F': fontSize *= 0.9; break;
  }
}
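Setting the video aside, the heart of the sketch is the brightness-to-character lookup built in setup() plus the per-pixel exponential smoothing in draw(). That part can be exercised in plain Java (the class name and method split are mine, not from the sketch):

```java
// Plain-Java sketch of the brightness -> character mapping used above.
// LETTER_ORDER is the string from the sketch (darkest to brightest).
public class AsciiMap {
    static final String LETTER_ORDER =
        " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu"
        + "nT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";

    // Build the 256-entry lookup table, like the loop in setup().
    static char[] buildLetters() {
        char[] letters = new char[256];
        for (int i = 0; i < 256; i++) {
            // same as int(map(i, 0, 256, 0, letterOrder.length()))
            int index = i * LETTER_ORDER.length() / 256;
            letters[i] = LETTER_ORDER.charAt(index);
        }
        return letters;
    }

    // One step of the exponential smoothing applied per pixel in draw().
    static float smooth(float previous, int pixelBright) {
        return previous + (pixelBright - previous) * 0.1f;
    }

    public static void main(String[] args) {
        char[] letters = buildLetters();
        System.out.println(letters[0]);    // darkest pixels map to ' '
        System.out.println(letters[255]);  // brightest map to the last character
        System.out.println(smooth(128f, 255));  // eases from 128 toward 255
    }
}
```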
Also, once I have resolved this problem, I would like to know how to use only a restricted number of letters (in a way that all the parts of the image that use other letters will disappear).
And sorry for my broken english, I'm french ;)
Thank you all for the help
Answers
Did you test that the file plays at all?
I never saw mts as an ending
Yes it plays! I tried it before. Damn. Another guess?
Does the video play at all in Processing in a simple sketch?
If not, convert from mts to mov or mp4
As I said above, yes, it plays in mts as a simple sketch. Of course I checked that first :)
@sevc -- good to know! I asked because you only said "it plays" after Chrisr asked if it plays "at all." Both the original question and your answer were ambiguous; that could have meant you were able to play it in Media Player or QuickTime.
https://forum.processing.org/two/discussion/26198/existing-video-to-text-with-ascii
|
In a previous article.
Finding pixels with the least variance
The idea is to identify those pixels in a bunch of pictures that don't change much from picture to picture, even if the subject and lighting conditions do change. There are of course many ways to define change, but the one we explore here is called the variance. Basically we compute for each pixel in a set of pictures its average, and then sum (again for each pixel) the squared differences from that average over the pictures. The pixels (or subpixels) that have the smallest sums are likely candidates for being hot or stuck.
Using PIL and Numpy for efficient number crunching
Obviously calculating the variance for each subpixel in a few hundred pictures will entail some serious number crunching if these pictures are from a megapixel camera. We therefore better use a serious number crunching library, like Numpy. Because we use Python 3, I recommend fetching the PIL and Numpy packages from Christoph Gohlke's page if you use Windows.
The code is shown below. The program will open all images given as arguments one by one using PIL's
Image.open() function (line 9). PIL images can be converted directly to Numpy arrays by the
array() function. Because we might run out of memory we do not keep all images in memory together, but process them one by one and calculate the variance using the so-called on-line algorithm; the names of the variables used are the same as in the Wikipedia article. The position of the smallest variance is then located with Numpy's
argmin() function.
from PIL import Image
import numpy as np
import sys
from glob import glob

first = True
for arg in sys.argv[1:]:
    for filename in glob(arg):
        pic = Image.open(filename)
        pix = np.array(pic)
        if first:
            first = False
            firstpix = pix
            n = 1
            mean = np.zeros(firstpix.shape) + pix
            M2 = np.zeros(firstpix.shape)
        else:
            if pix.shape != firstpix.shape:
                print("shapes don't match")
                continue
            n += 1
            delta = pix - mean
            mean += delta/n
            M2 += delta*(pix - mean)

mini = np.unravel_index(M2.argmin(), M2.shape)
print('min', M2[mini], mini)
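To see that the running update really matches the usual definition of the sum of squared deviations, here is a single-value sketch of the on-line (Welford) algorithm in pure Python, with variable names mirroring the code above; the sample values are made up:

```python
# Welford's on-line algorithm for a single pixel: after processing all
# samples, M2 equals the sum of squared deviations from the mean.
def online_m2(samples):
    n, mean, M2 = 0, 0.0, 0.0
    for x in samples:
        n += 1
        delta = x - mean
        mean += delta / n
        M2 += delta * (x - mean)
    return mean, M2

samples = [12, 15, 11, 14, 13]
mean, M2 = online_m2(samples)

# two-pass check: same mean, same sum of squared deviations
direct_mean = sum(samples) / len(samples)
direct_M2 = sum((x - direct_mean) ** 2 for x in samples)
print(round(mean, 6), round(M2, 6))  # 13.0 10.0
```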
Results and limitations
To inspect more candidates, the ten subpixels with the smallest variance can be listed:
sorti = M2.argsort(axis=None)
print(*[(i, M2[i]) for i in
        [np.unravel_index(j, M2.shape) for j in sorti[:10]]],
      sep="\n")
http://michelanders.blogspot.com/2011/08/
|
Details
- Type:
Bug
- Status: Resolved (View Workflow)
- Priority:
Critical
- Resolution: Cannot Reproduce
-
- Labels:
- Similar Issues:
Description
Jenkins 2.49 Don't know what is actually the problem. Memory dump available for in-house usage via Dropbox etc. if needed. Please send me an email.
Seems to be related to a Groovy bug. Don't know when the Jenkins-bundled Groovy will be updated from 2.4.8 to 2.4.9.
Attachments
Issue Links
- is related to
JENKINS-33358 Groovy and PermGen memory leak
- Resolved
- relates to
JENKINS-43197 Deadlock when many jobs running
- Resolved
Activity
Sam Van Oort I will look into this. The Groovy bug report has a Java class to reproduce the problem in Groovy, but I understand that it is not quite relevant.
Is there a way to give the bundled Groovy a command line parameter, as suggested as a fast fix?
-Dgroovy.use.classvalue=true
Not sure if this will help, but in our /etc/sysconfig/jenkins we added:
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dgroovy.use.classvalue=true -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -Dhudson.model.ParametersAction.keepUndefinedParameters=true"
We also run some crazy groovy code in some of jenkins jobs and we tend to run out of memory too, so we've installed this plugin to help us track java resources:
And finally, we periodically (once per hour i think) run this groovy script to clean things up:
import net.bull.javamelody.*;

before = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
System.gc();
after = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
println I18N.getFormattedString("ramasse_miette_execute", Math.round((before - after) / 1024));
And we launch it like this:
java -jar jenkins-cli.jar -noCertificateCheck -i id_rsa -s JENKINS_URL groovy my_groovy_script.groovy
You can probably put that groovy script as a jenkins job have it run periodically.
Ever since we started running that System.gc() command, we've never run out of memory. We run hundreds of jobs a day on an AWS t2.medium without any downtime for months at a time. Before I did these things, we were running on a huge instance and we had to restart it every week or so.
Hope this helps!
Hello!
We could not make reproducible test case with small effort.
However, the parameter tuning from Groovy ticket seem to resolve the state. Now Jenkins has been working for one whole week without problems!
From another bug where it was mentioned, -Dgroovy.use.classvalue=true seems to have fixed this for us as well. We actually don't use anything crazy at all in Groovy (I use it to read load a 3 line text file into a parameter list and there are try/catch blocks around that read operation even to fail to a default "" response), nonetheless the bugs in this version brought the entire platform to a state of frequent unavailability. There are other bugs open related to this but it's hard to picture how this is "resolved" when it's such a major flaw that has already been fixed in groovy itself, though Jenkins still needs to upgrade to using that newer version internally.
Heikki Simperi We understand this is a critical issue for you, and in order to solve it, can you provide an isolated example that will reproduce this without depending on your infra?
I think the reason we're hesitant to update Groovy more is that every update seems to introduce a new issue of this sort, which generally requires a significant time investment from someone deeply specialized in this area.
https://issues.jenkins.io/browse/JENKINS-42637?focusedCommentId=296584&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
|
SYNOPSIS
#include <signal.h>
int sigaction(int signum, const struct sigaction *act,
struct sigaction *oldact);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
sigaction(): _POSIX_C_SOURCE >= 1 || _XOPEN_SOURCE || _POSIX_SOURCE
DESCRIPTION
The sigaction() system call is used to change the action taken by a
process on receipt of a specific signal. (See signal(7) for an
overview of signals.) In addition, the signal which triggered the
handler will be blocked during handler execution, unless the
SA_NODEFER flag is used.
SA_NOCLDWAIT
       If signum is SIGCHLD, do not transform children into zombies
       when they terminate. See also waitpid(2). This flag is
       only meaningful when establishing a handler for SIGCHLD, or
       when setting that signal's disposition to SIG_DFL.
SA_NODEFER
Do not prevent the signal from being received from within
its own signal handler. This flag is only meaningful when
establishing a signal handler. SA_NOMASK is an obsolete,
non-standard synonym for this flag.
SA_ONSTACK
       Call the signal handler on an alternate signal stack provided
       by sigaltstack(2). If an alternate stack is not available,
       the default stack will be used. This flag is only meaningful
       when establishing a signal handler.
SA_RESETHAND
       Restore the signal action to the default state once the
       signal handler has been called. This flag is only meaningful
       when establishing a signal handler. SA_ONESHOT is an
       obsolete, non-standard synonym for this flag.
SA_RESTART
       Provide behavior compatible with BSD signal semantics by
       making certain system calls restartable across signals.
       This flag is only meaningful when establishing a signal
       handler. See signal(7) for a discussion of system call
       restarting.
SA_SIGINFO (since Linux 2.2)
       The signal handler takes 3 arguments, not one. In this
       case, sa_sigaction should be set instead of sa_handler.
       This flag is only meaningful when establishing a signal
       handler.
si_signo, si_errno and si_code are defined for all signals. (si_errno
is generally unused on Linux.) The rest of the struct may be a union,
so that one should only read the fields that are meaningful for the
given signal:
* POSIX.1b signals and SIGCHLD fill in si_pid and si_uid.
* POSIX.1b timers (since Linux 2.6) fill in si_overrun and si_timerid.
The si_timerid field is an internal ID used by the kernel to identify
the timer; it is not the same as the timer ID returned by timer_cre-
ate(2).
* SIGCHLD fills in si_status, si_utime and si_stime. The si_utime and
  si_stime fields do not include the times used by waited-for children
  (unlike getrusage(2) and times(2)).

The following values can be placed in si_code for a SIGILL signal:
ILL_ILLOPC illegal opcode
ILL_BADSTK internal stack error
The following values can be placed in si_code for a SIGFPE signal:
The following values can be placed in si_code for a SIGSEGV signal:
SEGV_MAPERR address not mapped to object
SEGV_ACCERR invalid permissions for mapped object
The following values can be placed in si_code for a SIGCHLD signal:
CLD_EXITED child has exited
CLD_KILLED child was killed
CLD_DUMPED child terminated abnormally
CLD_TRAPPED traced child has trapped
CLD_STOPPED child has stopped
CLD_CONTINUED stopped child has continued (since Linux 2.6.9)
The following values can be placed in si_code for a SIGPOLL signal:
POLL_HUP device disconnected
RETURN VALUE
sigaction() returns 0 on success and -1 on error.
ERRORS
EFAULT act or oldact points to memory which is not a valid part of the
process address space.
EINVAL An invalid signal was specified. This will also be generated if
an attempt is made to change the action for SIGKILL or SIGSTOP,
which cannot be caught or ignored.
CONFORMING TO
POSIX.1-2001, SVr4.
NOTES
A child created via fork(2) inherits a copy of its parent's signal
dispositions. During an execve(2), the dispositions of handled signals
are reset to the default; the dispositions of ignored signals are left
unchanged. The only completely portable method of ensuring that
terminated children do not become zombies is to catch SIGCHLD and
perform a wait(2) or similar.
POSIX.1-1990 only specified SA_NOCLDSTOP. POSIX.1-2001 added
SA_NOCLDWAIT and the other flags described above. The original
implementation allowed the receipt of any signal, not just the one we
are installing (effectively overriding any sa_mask settings).
sigaction() can be called with a null second argument to query the cur-
rent signal handler. It can also be used to check whether a given sig-
nal is valid for the current machine by calling it with null second and
third arguments.
Before the introduction of SA_SIGINFO it was also possible to get some
additional information, by using a sa_handler with a second argument
of type struct sigcontext. See the relevant kernel sources for
details. This use is obsolete now.
BUGS
In kernels up to and including 2.6.13, specifying SA_NODEFER in
sa_flags prevents not only the delivered signal from being masked dur-
ing execution of the handler, but also the signals specified in
sa_mask. This bug was fixed in kernel 2.6.14.
EXAMPLE
See mprotect(2).
SEE ALSO
kill(1), kill(2), killpg(2), pause(2), sigaltstack(2), signal(2), sig-
nalfd(2), sigpending(2), sigprocmask(2), sigqueue(2), sigsuspend(2),
wait(2), raise(3), siginterrupt(3), sigsetops(3), sigvec(3), core(5),
signal(7)
COLOPHON
This page is part of release 3.23 of the Linux man-pages project. A
description of the project, and information about reporting bugs, can
be found at.
http://www.linux-directory.com/man2/rt_sigaction.shtml
When trying to loop several `webbrowser.open()` with `x-callback` to Due in, only the first call works
I've been trying to automate the export of OmniFocus tasks to Due, and for that I need to use Due's URL scheme repeatedly in the script. You can see an example in this gist: . It looks like
webbrowser.open(url1)
print 'Done 1.'
webbrowser.open(url2)
print 'Done 2.'
No exceptions are raised, & the output is
Done 1. Done 2.
As if all calls were made. Why does it not open the URL, come back to Pythonista, and open the next one?
You cannot really "pause until I get a callback". You may be able to do this as a UI, by waiting for on_screen to go away, then come back.
Or, make everything up to the first open one script, then everything after another script, and then whatever you want after as a third, and have the callbacks open the appropriate script. Maybe pickle your state so you can pick up where you left off.
Browsing other answers, I saw people using Scene.pause() for this, and I coupled it with time.sleep() in a loop, to wait some time before repeating. It seems to work fine, but I'm not sure I used scene right, and maybe it's just the sleep() that stops the script until the app regains foreground.
I did think about chaining scripts though, thank you.
After a few more tests, I found that calling time.sleep(n), where n is anything large enough to leave enough time to click on OK to get to the app you want to open, works. Apparently, the clock stops when Pythonista isn't active, and so it doesn't stop waiting until we get back...
import console, time

# launch your app
time.sleep(1)
while console.is_in_background():
    time.sleep(1)

# launch your app
time.sleep(1)
while console.is_in_background():
    time.sleep(1)
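The same "trigger, then wait until we are foregrounded again" pattern can be sketched generically (run_with_waits, poll and max_wait are names invented here; in Pythonista the in_background predicate would be console.is_in_background):

```python
import time


def run_with_waits(steps, in_background, poll=1.0, max_wait=30.0):
    """Run each step (e.g. a webbrowser.open call), then poll until the
    host app is in the foreground again before running the next one."""
    results = []
    for step in steps:
        results.append(step())
        waited = 0.0
        # The clock effectively stops while the app is backgrounded,
        # so this loop resumes once we are switched back.
        while in_background() and waited < max_wait:
            time.sleep(poll)
            waited += poll
    return results
```

With console.is_in_background as the predicate, each open gets a chance to round-trip through the x-callback before the next one fires.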
https://forum.omz-software.com/topic/2614/when-trying-to-loop-several-webbrowser-open-with-x-callback-to-due-in-only-the-first-call-works
|
Combine selected points into one point
I would love to select two or more points and say 'combine points into one point'. This function would calculate the average center of selected points + calculate average bcps for that one point. Would be handy for cleaning paths, for example after auto-tracing. In the next update maybe?
That would be an excellent extension :)
It would also be an excellent native function…
I've written an extension that does this.
If its not what you had in mind, or if you have any other ideas for extensions, let me know.
I'm not yet satisfied with how it updates control points adjacent to "merged" points.
Hey, cool!
Move your scripts that have callbacks from the menu to the root folder.
From there you can import everything inside a RoboFont extension.
Hmm, I'm not sure I understand.
You're saying that any script that corresponds to a menu item needs to be in the root folder (ie. the "script root" in the Extension Builder)?
no, but you will have access to other python scripts and modules if they are on the same level or deeper
(this isn't a limitation of RoboFont, this is default Python behavior if the module isn't added to the site packages or sys.path)
scripting Root
- test.py  # test can import testFolder and all sub py files
- testFolder
  - __init__.py
  - otherFile.py  # otherFile can not import test.py or otherTestFolder
- otherTestFolder
  - __init__.py
hope this makes sense
Ah, I see what you mean.
Alternately, RoboFont could add the root folder of the extension to the python path when it invokes a script from that extension.
I'm not sure how you're invoking extension scripts in Robofont...
Anyhow, I've reorganized the extension per your suggestion.
Thanks
Alternately, RoboFont could add the root folder of the extension to the python path when it invokes a script from that extension.
I’m not sure how you’re invoking extension scripts in Robofont…
mm, I think this will be a mess after a while.
It's already super easy to add a path to sys.path :)
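For instance, a minimal sketch of what an extension script could do (add_to_sys_path is a name invented here):

```python
import os
import sys


def add_to_sys_path(path):
    """Prepend an absolute version of path to sys.path once, so that
    modules living there become importable from any extension script."""
    path = os.path.abspath(path)
    if path not in sys.path:
        sys.path.insert(0, path)
    return path


# e.g. at the top of an extension script:
# add_to_sys_path(os.path.dirname(__file__))
```

The membership check keeps repeated invocations from piling up duplicate entries on sys.path.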
Yes, you're right.
I moved this extension to
a progress bar example:
from defconAppKit.windows.progressWindow import ProgressWindow
# CurrentFontWindow is only available in RoboFont 1.3
from mojo.UI import CurrentFontWindow
import time

progress = ProgressWindow("my progress")
time.sleep(2)
progress.close()

## attach as sheet to a window
progress = ProgressWindow("my progress", parentWindow=CurrentFontWindow().window())
time.sleep(2)
progress.update("action....")
time.sleep(2)
progress.close()

## with tick count
count = 20
progress = ProgressWindow("my progress", tickCount=count)
for i in range(count):
    time.sleep(.1)
    progress.update("action....%s" % i)
progress.close()
cool, thanks.
Hmm, that can't be right.
for i in range(count):
    time.sleep(.1)
    progress.update("action....%s" % i)
You're formatting a string with an int but using the string format descriptor: %s
Also, shouldn't progress.update() take an int argument?
Lastly,
from mojo.UI import CurrentFontWindow
Yields:
ImportError: cannot import name CurrentFontWindow
in RoboFont 1.2
oho, just saw this:
CurrentFontWindow is only available in RoboFont 1.3
Oh, I get it. You call progress.update() count times: that's why there's no int argument.
Hmm, the help() built-in function doesn't work in the RoboFont scripting window.
from defconAppKit.windows.progressWindow import ProgressWindow
print dir(ProgressWindow)
help(ProgressWindow.update)
Yields:
NameError: name 'help' is not defined
Yeah, help is great, didn't know it wasn't available. This will be added in the next version.
thanks
ps: a small workaround:
from defconAppKit.windows.progressWindow import ProgressWindow
import pydoc

## this is actually happening in the built-in 'help'
pydoc.help(ProgressWindow.update)
https://forum.robofont.com/topic/124/combine-selected-points-into-one-point/8?lang=en-US
|
dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - 521236, Aug 1, 2012 9:33 AM
I would like to use the dynamic partner links in BPEL2.0 in SOA Suite 11.1.1.6.0 but I get this error:
As far as I know the dynamic partner link (in BPEL 2.0) was not supported in 11.1.1.5.0 but it is in 11.1.1.6.0. So it should work, but unfortunately it doesn't.
<env:Fault xmlns:env="...">
  <faultcode>ns0:selectionFailure</faultcode>
  <faultstring>fromValue is not a sref:service-ref element</faultstring>
</env:Fault>
Could anyone please help me?
Thanks,
V.
<assign name="Assign_set_EndpointReference_InventoryConfirmation">
  <copy>
    <from>
      <literal>
        <sref:service-ref xmlns:sref="...">
          <EndpointReference xmlns="...">
            <Address>...</Address>
          </EndpointReference>
        </sref:service-ref>
      </literal>
    </from>
    <to partnerLink="PL_InventoryConfirmation"/>
  </copy>
</assign>
1. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - veejai24, Aug 1, 2012 1:45 PM (in response to 521236)
Do you have everything as per the document?
1. Multiple services that use the same portType in the WSDL
2. Created a web service call in the composite.
3. Check whether you have the reference tag for the web service in the composite.xml file.
4. Do you have the XML fragment in the assign?
Thanks,
Vijay
2. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - 521236, Aug 2, 2012 3:32 PM (in response to veejai24)
Hi Vijay,
thanks for the answer!
I was thinking that the dynamic partner link is good tool to change the endpoint URL in runtime. So I didn't change my BPEL process just added an assign operation before using the partner link. Am I mistaken? Do I have to change the WSDL as well? I want to use services which use the same WSDL but their endpoints are different.
1. -> I think I don't have to define multiple services (I have just one kind of service / one WSDL)
2. -> I am not sure what you are thinking of but I defined the partner link in composite.xml and linked to the BPEL process
3. -> I have it
4. -> I have it, I copied it into my original question.
Thanks for the help!
V.
3. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - Vamseeg-Oracle, Aug 2, 2012 10:39 PM (in response to 521236)
It should work as long as you are using the syntax below:
<assign>
<copy>
<from>
<literal>
<sref:service-ref>
<EndpointReference xmlns="">
<!--Address></Address-->
<ServiceName xmlns:ns1="...">ns1:UnitedLoan</ServiceName>
</EndpointReference>
</sref:service-ref>
</literal>
</from>
<to partnerLink="LoanService"/>
</copy>
</assign>
Edited by: vamseeg on Aug 2, 2012 3:39 PM
4. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - veejai24, Aug 3, 2012 8:32 AM (in response to 521236)
Please follow the link below
Thanks,
Vijay
5. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - 521236, Aug 7, 2012 9:35 AM (in response to Vamseeg-Oracle)
Thanks Vamseeg,
probably my problem was that I didn't put the ServiceName element in the EndpointReference as the other elements were the same.
Great, thanks again!
V.
6. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - KeithFosberg, Oct 29, 2012 2:35 PM (in response to veejai24)
That link leads to instructions for an async partner link. Is it appropriate for synchronous partner links?
Frankly, this documentation is really irritating. It makes a lot of assumptions and is far too vague in places. The very first thing it says is "Create a WSDL file that contains multiple services that use the same portType." but it doesn't specify which WSDL (I have several in my project) and seems to be drawn from some example code with no explanation.
I have a wsdl for a partner link that looks like this:
Edited by: Keith Fosberg on Oct 29, 2012 7:31 AM
<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions ...>
  <plnk:partnerLinkType ...>
    <plnk:role .../>
  </plnk:partnerLinkType>
  <wsdl:types>
    <schema xmlns="..." xmlns:xsd="...">
      <import namespace="..." schemaLocation="xsd/f17borrowerCheck.xsd"/>
    </schema>
    <xsd:schema ...>
    ...
7. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - KeithFosberg, Oct 29, 2012 6:06 PM (in response to KeithFosberg)
The answer is "no, this is inapplicable to synchronous links".
8. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - 918815, Oct 29, 2013 3:46 PM (in response to KeithFosberg)
Hi All,
Up to now I am able to get the values as below, except "PortName":
<partnerRole>
<role name="ProcessProvider">
<ServiceName>{}process_client_ep</ServiceName>
<PortName/>
<PortType>{}Process</PortType>
<Address></Address>
</role>
</partnerRole>
please help me out to fetch value of a PortName.
Thanks
Shankar
9. Re: dynamic partner link in BPEL 2.0 in SOA Suite 11.1.1.6.0 - Bibhuti Bhusan, Feb 25, 2014 8:32 AM (in response to 918815)
Hi,
I am using Oracle SOA Suite 11.1.1.7. While implementing the dynamic partner link using BPEL 2.0 I am getting the following exception:
<bpelFault><faultType>0</faultType><selectionFailure xmlns=""><part name="summary"><summary>fromValue is not a sref:service-ref element</summary></part></selectionFailure></bpelFault>
I have followed the above thread but could not resolve the issue.
Your help would be appreciated. Thanks in advance.
Regards, Bibhu
https://community.oracle.com/message/10505090
|
Importing multiple schemas with same namespace?
Discussion in 'XML' started by Steve George, Apr 11, 2005.
http://www.thecodingforums.com/threads/importing-multiple-schemas-with-same-namespace.169138/
|
News: Content of this SCN Doc will be maintained now in wiki page
Purpose
With the following hints you will be able to configure the use of Service Level Agreement SLA to make sure that messages are processed within the defined period of time.
For configuring SLA you should get this document SLA Management from SAP SMP.
Here I will try to give you some hints for SLA configuration.
The screenshots are taken from a Solution Manager 7.1 SP05 with a Incident Management standard scenario configuration.
Overview
By setting up the SLA Escalation Management mechanism the system monitors when deadlines defined in the SLA parameters have been exceeded in the service process and which follow-up processes would be triggered. For example, email notifications will be sent to upper levels in the Service Desk organization like to responsible IT Service Managers to inform them immediately about expiration of deadlines and SLA infringements. Thereby, IT Service Managers are only involved in the ticketing process when it is really necessary.
Definitions
IRT (Initial Reaction Time): the time by which the first reaction on the created incident has to be performed at the latest.
When the processor starts processing the incident, it is enriched with the timestamp “First Reaction” for the actual first reaction by the processor.
MPT (Maximum Processing Time): the time by which the processing of the incident has to be completed at the latest.
When the incident is closed by the reporter (in the case that a newly created incident is withdrawn or a proposed solution is confirmed), the incident is enriched with the timestamp “Completed” for the actual incident completion.
Step 1. Copy transaction type SMIN -> ZMIN
We are going to work with ZMIN transaction type.
Insist here on the fact that you should copy all transaction types into your own namespace or <Z>, <Y> namespace before starting to use Incident Management: copy transaction types, copy status profiles, action profiles, etc. If not, your modifications to the standard will be overwritten after the next support package import. This is really important in Solman 7.1!
After each Support Patch applications you have the option to use report AI_CRM_CPY_PROCTYPE “Update option” to update already copied transaction types with new shipped SAP standard configuration.
Step 2. Define Service Profile & Response Profile
Transaction: CRMD_SERV_SLA (/nSPRO ->SAP Solution Manager IMG -> SAP Solution Manager -> Capabilities (optional) ->Application Incident Management (service Desk) -> SLA Escalations ->Edit Availability and Response Times)
Example:
Factory calendar must be a valid one, see transaction /nscal and notes 1529649 and 1426524.
Note: The usage of Holiday calendar in availability time is not supported by SLA date calculation i.e. you MUST use the option “Factory Calendar” or “All Days Are Working Days”.
Pay attention to the System time zone & user time zone. Check in STZAC (note: 1806990).
Create a response profile:
I would suggest always maintaining the times in MIN.
Make the same for all priorities.
Step 3. Define SLA Determination Procedure
SM34 : CRMVC_SRQM_SDP (SPRO -> SAP Solution Manager IMG ->SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations -> SLA Determination Procedure)
Create your own SLA determination procedure:
What is important here is to determine where are the response profiles and service profiles to check first, there are several alternatives:
Possible access sequence:
– Service Contracts
Please note that currently Service Contract Determination is just recommended for upgrade purposes to SAP Solution Manager 7.1 SPS 05 to support already defined configurations. For enabling Service Contracts, the required customizing has to be performed (please keep in mind that usage of Service Contracts in SPS 05 is not supported at the moment by SAP – no adaptations and tests were performed for SPS 05).
– Service Product Item
A Service Profile as well as a Response Profile can be attached to a Service Product. The Service Product can also be assigned to specific master data like to the category of a defined Multilevel Categorization. In case of selecting this category during the incident creation process, the correct Service Product
will be determined as well as its defined Service & Response Profiles.
– Reference Objects (IBase Component)
A Service Profile as well as a Response Profile can be attached to a specific IBase Component. This means, if this IBase Component is entered during the
incident creation process, the related Service & Response Profile will be chosen.
– Business Partners (Sold-To Party)
A Service Profile as well as a Response Profile can be attached to a specific Sold-To Party (e.g. a Customer). This means, if this Sold-To Party is entered to the incident (manually by the Processor or automatically by a defined rule), the related Service & Response Profile will be assigned
The most frequently used are the SLA determination via Service Product item and Business Partners (sold-to party).
If you need your own SLA determination check BAdI Implementation CRM_SLADET_BADI (IMG: Customer Relationship Management -> Transactions -> Settings for Service Requests -> Business Add-Ins -> Business Add-In for SLA Determination).
Now check that you linked this new SLA Determination procedure to ZMIN
Step 4. Define Settings for Durations
Specify the times to be recalculated when the status changes, under “Specify Duration Settings”.
SM30: CRMV_SRQM_DATSTA (/nSPRO-> SAP Solution Manager IMG ->SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations ->Define Settings for Durations)
Note 1674375 is having two attached files indicated entries to be inserted and to be deleted.
For solman 7.1 until SP04 included these should be the standard entries:
For SP05 and above these are the default entries:
Date profile is not the data profile of the ZMIN transaction type, this is the date profile of the item category used, usually SMIP, we will see details about this in Step 9.
Note: For SMIV incidents in VAR scenarios, status E0010 Sent to Support means that the incident is at the Solman side, so the correct entries are:
Status Profile   Status                  Date Profile   Duration type
ZMIV0001         E0010 Sent to Support   SMIN_ITEM      SRQ_TOT_DUR
ZMIV0001         E0010 Sent to Support   SMIN_ITEM      SRQ_WORK_DUR
As a summary, if the status means that the incident is at the processor side, the correct entries are:
Status Profile   Status   Date Profile   Duration type
XMIX0001         E000X    SMIX_ITEM      SRQ_TOT_DUR
XMIX0001         E000X    SMIX_ITEM      SRQ_WORK_DUR
If the status means that the incident is at the key user side, the correct entries are:
Status Profile   Status   Date Profile   Duration type
XMIX0001         E000X    SMIX_ITEM      SMIN_CUSTL
XMIX0001         E000X    SMIX_ITEM      SMIN_CU_DURA
XMIX0001         E000X    SMIX_ITEM      SRQ_TOT_DUR
XMIX0001         E000X    SMIX_ITEM      SRV_RR_DURA
See the meaning of the Duration fields:
– Duration Until First Reaction:
This period of time is defined within the Response Profile and represents the basis for IRT calculation. Based on the selected incident priority, you should see the same values as defined in the Response Profile (dependencies between Incident Priority Level and “Duration Until First Reaction”).
– Duration Until Service End:
This period of time is defined within the Response Profile and represents the basis for MPT calculation. Based on the selected incident priority, you should see the same values as defined in the Response Profile (dependencies between Incident Priority Level and “Duration Until Service End”).
– Total Customer Duration:
The time duration when an incident message is assigned to the reporter (incident status is set to “Customer Action”, “Proposed Solution” or “Sent to SAP”) is
added and visible via the parameter “Total Customer Duration”.
– Total Duration of Service Transaction:
The time duration for the whole processing of the incident message is added and visible via the parameter “Total Duration of Service Transaction”.
– Work Duration of Service Transaction:
The time duration when an incident message is assigned to the processor is added and visible via the parameter “Work Duration of Service Transaction”.
See the meaning of Date Types fields:
– Notification Receipt:
When an incident message is created by the reporter the system sets the timestamp “Notification Receipt” which represents the initialization of the service start. This timestamp is the basis for all future SLA time calculations.
– First Response By: the timestamp by which the first reaction at the created incident has to be performed at the latest.
– First Reaction:
When the processor starts processing the incident then it is enriched with the timestamp “First Reaction” for actual first reaction by the processor.
– To Do By: the timestamp by which the processing of the incident has to be completed at the latest (based on MPT).
– Completed:
When the incident is closed by the reporter (in the case that a newly created incident is withdrawn or a proposed solution is confirmed) then the incident is enriched with the timestamp “Completed” for actual incident completion.
– Customer Status Changed:
The timestamp “Customer Status Changed” is set every time when the processor changes the status of an incident message to a customer status like “Customer Action”, “Proposed Solution” or “Sent to SAP”.
This information represents at what given point in time the incident was assigned to the reporter.
It is also the basis for IRT & MPT recalculation because customer times do not affect the SLA calculation.
Step 5. Specify Customer Time Status
/nSPRO -> SAP Solution Manager IMG -> SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations -> Specify Customer Time Status
Identify non-relevant customer times in the step “Specify Customer Time Status”. That means the clock is stopped while time is spent in these statuses.
Customer times are specified by the user status of an incident message. Defined Customer Times Statuses do not affect the SLA calculation (MPT calculation). This mechanism should prevent mainly for SLA escalations if an incident has to be processed by another person than the processor.
The processor requires additional information from the reporter which is not included at the moment within the created message description. For an adequate processing, the incident will be commented with a request for providing additional information and assigned back to the reporter by the incident status change to “Customer Action”. The duration the reporter requires for enrichment of the incident should be excluded from calculation of SLA times because the processor is not able to take any influence on the time the reporter needs to provide the information (in the worst case the message is sent back to the processor and the MPT would be already exceeded). The period of time the message is on reporter’s side is added to the parameter “Total Customer Duration” and the MPT will be recalculated according to this value.
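How the recalculation plays out can be sketched in a few lines (an illustration only, not SAP code; mpt_due and its parameters are names invented here):

```python
from datetime import datetime, timedelta


def mpt_due(notification_receipt, mpt_minutes, customer_minutes):
    """Illustrative only: the MPT deadline is the service start
    ("Notification Receipt") plus the duration from the response profile,
    shifted by the accumulated "Total Customer Duration", which does not
    count against the SLA."""
    return notification_receipt + timedelta(minutes=mpt_minutes + customer_minutes)


start = datetime(2013, 1, 7, 9, 0)
# 8h MPT; the incident spent 90 minutes in "Customer Action"
print(mpt_due(start, 8 * 60, 90))  # → 2013-01-07 18:30:00
```

Every minute the incident spends in a customer status pushes the effective deadline out by the same amount, which is exactly the recalculation described above.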
Step 6. Create a product
If you decide to use the SLA determination based on Service Product Item you need to create a product.
Product INVESTIGATION will be created automatically when you perform in solman_setup for ITSM the activity Create Hierarchy for Service Products.
Execute transaction COMMPR01, find product ID INVESTIGATION.
Note: Use Unit of Measure MIN
That avoids errors which could be caused by time rounding.
Ensure that SRVP is entered in the Item Cat. Group:
Enter your service and response profiles.
Step 7. Check the Item Categories
SM34: CRMV_ITEM_MA ( /nSPRO IMG -> CRM -> Transactions -> Basic Settings -> Define Item Categories)
You can use SRVP:
Step 8. Check the Item Category Determination
SE16: CRMC_IT_ASSIGN (/nSPRO IMG -> CRM -> Transactions -> Basic Settings -> Define Item Category Determination)
You should see the relation between ZMIN, SRVP and SMIP.
Step 9. Check SMIP Item category
/nSPRO IMG -> CRM -> Transactions -> Basic Settings -> Define Item Categories
Pay attention to the Date Profile.
With these settings the SLA times (IRT and MPT) will be calculated for any created incident message according to the parameters set within “Investigation”.
Step 11. SLA Escalation
The following clarifies how SLA Escalation is working including the configuration of the email notification service.
The SLA Escalation mechanism is used to inform responsible staff like IT Service Managers immediately about expiration of deadlines and SLA infringements.
In the case that an incident message reaches the calculated IRT or MPT timestamp, the systems sets the status automatically at first to “Warning”. If the timestamp is exceeded than the incident’s status is set to “Exceeded”. In both cases an email notification will be triggered to defined partner functions.
Report AI_CRM_PROCESS_SLA is responsible for setting the warning/escalated status values once these thresholds are exceeded.
So firstly ensure that your incidents are receiving the correct status values (IRT/MPT warning/escalated).
Note that these are “additional” status values, which are not reflected in the main status of the incident. To view these status values, make the “Status” assignment block visible in the CRM UI, or view the Incident in the old CRMD_ORDER transaction.
If your incidents are not receiving the correct status values, the e-mail actions will not function correctly.
Then ZMIN_STD_SLA_IRT_ESC/ZMIN_STD_SLA_MPT_ESC are intended to be scheduled based on the status of the incident, not directly on the evaluation of the respective durations.
11.1. Maintaining SLA E-mail Actions
In the standard SMIN_STD profile delivered by SAP, the following actions (smartform based) are responsible for generating e-mails once escalation conditions have been reached since SP04:
– ZMIN_STD_SLA_IRT_ESC
– ZMIN_STD_SLA_MPT_ESC
Please see the scheduling/starting conditions to ensure that they are appropriate for your customized transaction type ZMIN and ZMIN_STD action profile.
If you need to send also emails at warning times you will need to create actions:
– ZMIN_STD_SLA_IRT_WRN
– ZMIN_STD_SLA_MPT_WRN
Use the same settings as for the shown actions *ESC; the only difference is in the start condition: you need to use IRT_WRN and MPT_WRN, which do not exist by default. For fixing this:
1. Open BAdI implementation AI_SDK_SLA_COND in t-code SE19.
2. Change to Edit mode and deactivate this BAdI implementation.
3. Add Filter Values “IRT_WRN” and “MPT_WRN”.
4. Save and activate the BAdI implementation.
Then, you will be able to select IRT_WRN / MPT_WRN from the start condition list.
11.2. Schedule SLA Escalation Background Job for triggering Email Notifications
Since SAP Solman 7.1 SP04:
/n SPRO -> SAP Solution Manager IMG -> SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations -> Schedule Escalation Background Job
Schedule a job for report AI_CRM_PROCESS_SLA running every 10 minutes for example.
This job is updating the SLA data for the incidents setting the additional user statuses (IRT Exceeded/IRT Warning/MPT Exceeded/MPT Warning).
Note: It could happen that in sm_crm -> Incident search the search result shows, for example, IRT Warning in the IRT Status text for an incident, although in the incident itself this additional status is not set. The search makes its own calculation, but the emails are only triggered when the status is really set by this report in the incident document.
Before SAP Solman 7.1 SP04 you need to schedule report RSPPFPROCESS.
11.3. Email Notification
In case all previously described configuration activities were performed properly, email notifications will be sent automatically for the following IRT and MPT status conditions:
– Warning
– Exceeded
A default email will be sent with following parameter:
– In case that IRT is impacted (incident status “Warning” or “Exceeded”):
- Subject: “Transaction: <Incident ID> First Response Exceeded”
- PDF attachment with the same file name like the subject
– In case that MPT is impacted (incident status “Warning” or “Exceeded”)
- Subject: “Transaction: <Incident ID> Completion Exceeded”
- PDF attachment with the same file name like the subject
Step12. Activate SLA Escalations
/nSPRO -> SAP Solution Manager IMG -> SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations -> Activate SLA Escalations
In transaction DNO_CUST04 set the attribute SLA_ESCAL_ACTIVE to ‘X’
Related Content
Related Documentation
SLA Management guide
Related Notes
Always check the SLA notes relevant for your patch level and ensure that you have implemented the latest version of the notes.
Dolores,
Excellent work here. Many man(woman) hours saved for the SCN community.
Cheers!
Spectacular, Dolores, thank you very much for the document!
Regards,
Luis
Thanks for sharing Dolores.
One of my customers is testing it right now, we’ll go live this week (I hope). Everything seems to be ok. We are using the Factory Calendar and Service Product Investigation.
I had to do the additional steps that you explained to include the 2 new e-mail notifications: one for IRT Warning (IRT_WRN) and other for MPT Warning (MPT_WRN). I don’t understand why this is not included in the standard, if the standard configuration is prepared for WRN threshold. That was not included in the standard documentation (SLA guide) so it took me sometime to figure it out.
I only miss in the blog the step where we configure the thresholds: table AIC_CLOCKNAME. I needed to change the standard limits that are:
– Warning: 60%
– Exceeded: 100%
They need different values.
Best regards,
Raquel
Hi Raquel,
In SM30, enter table AIC_CLOCKNAME; you will see standard entries like the ones below:
IRT_ESC IRT Escalation SRV_RFIRST 100
IRT_WRN IRT Warning SRV_RFIRST 60
MPT_ESC MPT Escalation SRV_RREADY 100
MPT_WRN MPT Warning SRV_RREADY 60
Just update these entries as per your requirements.
Hope this resolves it.
Regards
Prakhar
Hi,
Thank you for the info, but I was not asking; I was informing that this configuration was missing in the blog. I have already done that.
Regards,
Raquel
Thank you, Dolores. This is a good short-guide for SLA configuration.
Thank you for the concise guide on SLA Management. I recently configured SLA Management for one of our clients and we were looking for IRT/MPT Warning notifications. This helped us to understand the mechanism involved and configure the same.
I agree with Raquel Pereira da Cunha; IRT/MPT Warning notifications should have been part of the standard package.
Best Regards,
Vivek
Hi Dolores,
Excellent SLA technical document.
Hi Dolores,
It’s a very good guide.
I have a requirement for different service time on different priorities.
For example, a “Very High” priority message need 7X24 working hours and other priorities need only 5X8 working hours.
Can it be realized?
Best regards,
Wang Lue
Hi Wang,
I am sorry to say that not in the standard.
Best regards,
Dolores
Hi Dolores,
Thanks for your reply.
We are trying to do some enhancements to realize it.
Best regards,
Wang Lue
Hello Lue
I am trying to set up the same scenario as you. Could you advise how you are trying to achieve this?
Thanks in advance.
Best regards.
Jorge
Hi Dolores
You mentioned that we cannot configure a scenario where a “Very High” priority message needs 7x24 working hours and other priorities need only 5x8 working hours.
Do you know any way to achieve this?
Thanks
Best Regards.
Jorge Luis Marquez
Hi Jorge,
If you need your own SLA determination check BAdI Implementation CRM_SLADET_BADI (IMG: Customer Relationship Management -> Transactions -> Settings for Service Requests -> Business Add-Ins -> Business Add-In for SLA Determination).
Best regards,
Dolores
Hello Dolores
Thanks for the reply.
I am working in SM 7.1. Do you have any example of coding this BAdI?
Should I work with ABAP colleagues?
Best regards
Jorge Luis Marquez
Hi Dolores,
Thanks for such a nice blog on SLA configuration. I have configured SLA for one of our clients and I am struggling with a requirement the client wants.
We have two support groups created in the system. The first one is L1 PP/MM/SD and the second one is L2 PP/MM/SD, etc., for every module. Similarly, multiple ticket statuses are already created. The SLA is applicable for the New and In Process statuses.
– Issue – Every ticket is initially assigned to the L1 support group with the New ticket status. L1 is handled by the client team, and if they want our (service provider) help, the support group is changed to L2, but the SLA clock does not reset, as it is measured based on status and service profiles. I have created separate priorities for the service provider, since the SLAs reset on priority change, but I see the IRT and MPT are the same.
Please suggest what configuration I have to do to reset the clock when the business partner is changed to L2 support in the Support Team field.
Regards,
Sri Kanth
Hi,
Can any one please provide solution for this situation.
Regards,
Sri Kanth
Hello Dolores
I am newly configuring SolMan and I can’t understand how the escalation works. Is it necessary to change the status of the incident to “IRT Exceeded” or “MPT Exceeded” before the escalation works? I am confused about that part. I have already scheduled the background job but still can’t understand how the data will be picked up by that job. I have also found that the item has an action profile and all its actions are inactive. Please guide.
Regards
Rashmi ranjan behera
Hello Dolores,
Thanks for your blog! We have configured SLA Management within SAP Solution Manager
ITSM. Everything is working fine except the calculation and population of the SLA dates is not performed automatically when the message is created from for example a managed system.
Only when I change the priority or the service profile manually in the message within the CRM_UI the date calculation is performed and the dates and durations are populated.
We are on Support Package 10. Do you know why the date calculation is not performed when a message is created from a managed system and the priority is already assigned?
Thanks,
Guido Jacobs
Thanks for this great post. I have an issue: all my configuration is as you mentioned. The only thing is that the status icon and the percentage on the incident are not changing, but the date calculation is working fine. In SM30, the entries in table AIC_CLOCKNAME are as suggested too. Any suggestions? Thanks and best regards.
Hi Dolores,
thanks for your blog about SLA Escalation!
I have 1 additionally question about email notification..
Is it possible to send SLA Escalation email notification to Message processor and also to some special SLA Manager?
—
regards,
Yessen Sypatayev
Though the question is directed to Dolores, I would like to reply to it with a yes, it is possible to do that.
You just have to have an action profile for the specific partner function and have the action condition based on the item escalation status.
Best Regards.
Thanks Jacob.
Now as workaround I solved my problem by this way:
—
best regards,
Yessen
Hi Yessen,
If you are still looking forward to notifying other parties, what you can do is link those partner functions to the support team (which is already determined in the transaction), that way you will be able to notify them based on the same conditions you maintained before.
Action profiles will be partner function dependent and there you go!
Thanks for your post. We have the following requirement:
If the notification receipt of the incident message is before 2 pm, set MPT (ToDo By) as Notification Receipt + 2 hrs.
Else, if the notification receipt of the incident message is after 2 pm, set MPT (ToDo By) as Notification Receipt + 1 day 2 hrs. Please advise how we can achieve this.
Please also clarify the following:
In this Post,
1) A date profile “ZMIN_HEADER” is assigned to the transaction type ZMIN and it has its own date rule logic for IRT and MPT.
2) Status profile and responsible profile have been assigned and it has its own duration logic as per the priority.
3) Date Profile SMIN_ITEM has been assigned for status “New-E0001” under “Define settings for Duration” and it has its duration logic.
4) SLA Determination Procedure has been defined .
5) Status profile and responsible profile have been assigned to the item category.
In what sequence will the IRT and MPT be calculated? And if the SLA Determination Procedure is good enough, why do we need to assign a date profile under the transaction type and in “Define Settings for Duration”?
Thanks
Dear Dolores…
This material is very detailed and helpful, Congrat!
Patricia
Dear Dolores,
Interesting blog. For my requirement i’m missing something. I want to put a SLA on a status from the status profile. This means the transaction type can have a status like ‘On hold’ for only 30 days. After 30 days the user has to be warned about the breach.
Do you have any suggestions?
Thank you!
Stefan Melgert
Hello Dolores,
I have followed all the steps, but the timer does not stop at the moment of the first reaction. Whenever the message is in our hands, it keeps adding time to the IRT indicator.
What could be happening?
I hope you can help me. Regards.
# Do more with patterns in C# 8.0
Visual Studio 2019 Preview 2 is out! And with it, a couple more C# 8.0 features are ready for you to try. It’s mostly about pattern matching, though I’ll touch on a few other news and changes at the end.
[Original in Blog](https://blogs.msdn.microsoft.com/dotnet/2019/01/24/do-more-with-patterns-in-c-8-0/)
More patterns in more places
============================
When C# 7.0 introduced pattern matching we said that we expected to add *more* patterns in *more* places in the future. That time has come! We’re adding what we call *recursive patterns*, as well as a more compact expression form of `switch` statements called (you guessed it!) *switch expressions*.
Here’s a simple C# 7.0 example of patterns to start us out:
```
class Point
{
public int X { get; }
public int Y { get; }
public Point(int x, int y) => (X, Y) = (x, y);
public void Deconstruct(out int x, out int y) => (x, y) = (X, Y);
}
static string Display(object o)
{
switch (o)
{
case Point p when p.X == 0 && p.Y == 0:
return "origin";
case Point p:
return $"({p.X}, {p.Y})";
default:
return "unknown";
}
}
```
Switch expressions
------------------
First, let’s observe that many `switch` statements really don’t do much interesting work within the `case` bodies. Often they all just produce a value, either by assigning it to a variable or by returning it (as above). In all those situations, the switch statement is frankly rather clunky. It feels like the 5-decades-old language feature it is, with lots of ceremony.
We decided it was time to add an expression form of `switch`. Here it is, applied to the above example:
```
static string Display(object o)
{
return o switch
{
Point p when p.X == 0 && p.Y == 0 => "origin",
Point p => $"({p.X}, {p.Y})",
_ => "unknown"
};
}
```
There are several things here that changed from switch statements. Let’s list them out:
* The `switch` keyword is “infix” between the tested value and the `{...}` list of cases. That makes it more compositional with other expressions, and also easier to tell apart visually from a switch statement.
* The `case` keyword and the `:` have been replaced with a lambda arrow `=>` for brevity.
* `default` has been replaced with the `_` discard pattern for brevity.
* The bodies are expressions! The result of the selected body becomes the result of the switch expression.
Since an expression needs to either have a value or throw an exception, a switch expression that reaches the end without a match will throw an exception. The compiler does a great job of warning you when this may be the case, but will not force you to end all switch expressions with a catch-all: you may know better!
Of course, since our `Display` method now consists of a single return statement, we can simplify it to be expression-bodied:
```
static string Display(object o) => o switch
{
Point p when p.X == 0 && p.Y == 0 => "origin",
Point p => $"({p.X}, {p.Y})",
_ => "unknown"
};
```
To be honest, I am not sure what formatting guidance we will give here, but it should be clear that this is a lot terser and clearer, especially because the brevity typically allows you to format the switch in a “tabular” fashion, as above, with patterns and bodies on the same line, and the `=>`s lined up under each other.
By the way, we plan to allow a trailing comma `,` after the last case in keeping with all the other “comma-separated lists in curly braces” in C#, but Preview 2 doesn’t yet allow that.
Property patterns
-----------------
Speaking of brevity, the patterns are all of a sudden becoming the heaviest elements of the switch expression above! Let’s do something about that.
Note that the switch expression uses the *type pattern* `Point p` (twice), as well as a `when` clause to add additional conditions for the first `case`.
In C# 8.0 we’re adding more optional elements to the type pattern, which allows the pattern itself to dig further into the value that’s being pattern matched. You can make it a *property pattern* by adding `{...}`s containing nested patterns to apply to the value’s accessible properties or fields. This lets us rewrite the switch expression as follows:
```
static string Display(object o) => o switch
{
Point { X: 0, Y: 0 } p => "origin",
Point { X: var x, Y: var y } p => $"({x}, {y})",
_ => "unknown"
};
```
Both cases still check that `o` is a `Point`. The first case then applies the constant pattern `0` recursively to the `X` and `Y` properties of `p`, checking whether they have that value. Thus we can eliminate the `when` clause in this and many common cases.
The second case applies the `var` pattern to each of `X` and `Y`. Recall that the `var` pattern in C# 7.0 always succeeds, and simply declares a fresh variable to hold the value. Thus `x` and `y` get to contain the int values of `p.X` and `p.Y`.
We never use `p`, and can in fact omit it here:
```
Point { X: 0, Y: 0 } => "origin",
Point { X: var x, Y: var y } => $"({x}, {y})",
_ => "unknown"
```
One thing that remains true of all type patterns, including property patterns, is that they require the value to be non-null. That opens the possibility of the “empty” property pattern `{}` being used as a compact “not-null” pattern. E.g. we could replace the fallback case with the following two cases:
```
{} => o.ToString(),
null => "null"
```
The `{}` deals with remaining nonnull objects, and `null` gets the nulls, so the switch is exhaustive and the compiler won’t complain about values falling through.
Positional patterns
-------------------
The property pattern didn’t exactly make the second `Point` case *shorter*, and doesn’t seem worth the trouble there, but there’s more that can be done.
Note that the `Point` class has a `Deconstruct` method, a so-called *deconstructor*. In C# 7.0, deconstructors allowed a value to be deconstructed on assignment, so that you could write e.g.:
```
(int x, int y) = GetPoint(); // split up the Point according to its deconstructor
```
C# 7.0 did not integrate deconstruction with patterns. That changes with *positional patterns* which are an additional way that we are extending type patterns in C# 8.0. If the matched type is a tuple type or has a deconstructor, we can use positional patterns as a compact way of applying recursive patterns without having to name properties:
```
static string Display(object o) => o switch
{
Point(0, 0) => "origin",
Point(var x, var y) => $"({x}, {y})",
_ => "unknown"
};
```
Once the object has been matched as a `Point`, the deconstructor is applied, and the nested patterns are applied to the resulting values.
Deconstructors aren’t always appropriate. They should only be added to types where it’s really clear which of the values is which. For a `Point` class, for instance, it’s safe and intuitive to assume that the first value is `X` and the second is `Y`, so the above switch expression is intuitive and easy to read.
Tuple patterns
--------------
A very useful special case of positional patterns is when they are applied to tuples. If a switch statement is applied to a tuple expression directly, we even allow the extra set of parentheses to be omitted, as in `switch (x, y, z)` instead of `switch ((x, y, z))`.
Tuple patterns are great for testing multiple pieces of input at the same time. Here is a simple implementation of a state machine:
```
static State ChangeState(State current, Transition transition, bool hasKey) =>
(current, transition) switch
{
(Opened, Close) => Closed,
(Closed, Open) => Opened,
(Closed, Lock) when hasKey => Locked,
(Locked, Unlock) when hasKey => Closed,
_ => throw new InvalidOperationException($"Invalid transition")
};
```
Of course we could opt to include `hasKey` in the switched-on tuple instead of using `when` clauses – it is really a matter of taste:
```
static State ChangeState(State current, Transition transition, bool hasKey) =>
(current, transition, hasKey) switch
{
(Opened, Close, _) => Closed,
(Closed, Open, _) => Opened,
(Closed, Lock, true) => Locked,
(Locked, Unlock, true) => Closed,
_ => throw new InvalidOperationException($"Invalid transition")
};
```
All in all I hope you can see that recursive patterns and switch expressions can lead to clearer and more declarative program logic.
Other C# 8.0 features in Preview 2
==================================
While the pattern features are the major ones to come online in VS 2019 Preview 2, there are a few smaller ones that I hope you will also find useful and fun. I won’t go into details here, but just give you a brief description of each.
Using declarations
------------------
In C#, `using` statements always cause a level of nesting, which can be highly annoying and hurt readability. For the simple cases where you just want a resource to be cleaned up at the end of a scope, you now have *using declarations* instead. Using declarations are simply local variable declarations with a `using` keyword in front, and their contents are disposed at the end of the current statement block. So instead of:
```
static void Main(string[] args)
{
using (var options = Parse(args))
{
if (options["verbose"]) { WriteLine("Logging..."); }
...
} // options disposed here
}
```
You can simply write
```
static void Main(string[] args)
{
using var options = Parse(args);
if (options["verbose"]) { WriteLine("Logging..."); }
} // options disposed here
```
Disposable ref structs
----------------------
Ref structs were introduced in C# 7.2, and this is not the place to reiterate their usefulness, but in return they come with some severe limitations, such as not being able to implement interfaces. Ref structs can now be disposable without implementing the `IDisposable` interface, simply by having a `Dispose` method in them.
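As a hedged sketch of what this enables (the `RentedBuffer` type and its pooling behavior are illustrative assumptions, not from the original post): a ref struct with a public `Dispose` method can participate in a `using` statement even though it cannot implement `IDisposable`:

```
ref struct RentedBuffer
{
    private byte[] _array;

    public RentedBuffer(int size) =>
        _array = System.Buffers.ArrayPool<byte>.Shared.Rent(size);

    public Span<byte> Span => _array;

    // No IDisposable here: for ref structs, a public Dispose method
    // is enough for the compiler to allow 'using'.
    public void Dispose()
    {
        if (_array != null)
        {
            System.Buffers.ArrayPool<byte>.Shared.Return(_array);
            _array = null;
        }
    }
}
```

With this in place, `using var buffer = new RentedBuffer(1024);` returns the array to the pool at the end of the enclosing scope, just like a class implementing `IDisposable` would be disposed.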
Static local functions
----------------------
If you want to make sure your local function doesn’t incur the runtime costs associated with “capturing” (referencing) variables from the enclosing scope, you can declare it as `static`. Then the compiler will prevent references to anything declared in enclosing functions – except other static local functions!
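For instance (a hypothetical sketch, not from the original post):

```
int HashName(string name)
{
    return Combine(name.Length, name[0]);

    // 'static' guarantees Combine cannot capture 'name' or any other
    // enclosing variable, so no closure allocation can sneak in;
    // referencing 'name' inside its body would be a compile-time error.
    static int Combine(int length, char first) => length * 31 + first;
}
```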
Changes since Preview 1
=======================
The main features of Preview 1 were nullable reference types and async streams. Both have evolved a bit in Preview 2, so if you’ve started using them, the following is good to be aware of.
Nullable reference types
------------------------
We’ve added more options to control nullable warnings both in source (through `#nullable` and `#pragma warning` directives) and at the project level. We also changed the project file opt-in to `enable`.
Async streams
-------------
We changed the shape of the `IAsyncEnumerable` interface the compiler expects! This brings the compiler out of sync with the interface provided in .NET Core 3.0 Preview 1, which can cause you some amount of trouble. However, .NET Core 3.0 Preview 2 is due out shortly, and that brings the interfaces back in sync.
Have at it!
===========
As always, we are keen for your feedback! Please play around with the new pattern features in particular. Do you run into brick walls? Is something annoying? What are some cool and useful scenarios you find for them? Hit the feedback button and let us know!
Happy hacking,
Mads Torgersen, design lead for C#
Download ===>
Download ===>
AutoCAD 2018 22.0 Crack + Product Key
Unlike desktop programs like AutoCAD Serial Key, mobile apps often run in the background while you continue to use other apps on your device. Mobile app AutoCAD Mobile has a limited feature set compared to desktop AutoCAD, though it’s still useful for small-scale architectural work. You can create basic geometric objects, including lines, circles, squares and polygons, using the device’s built-in digital pen, or by dragging and dropping. You can create and edit paths, and you can also use the app to draw with brushes and pens that correspond to other app tools. The app will detect and correct your pen strokes, which makes it a good choice for beginners and students.
For more advanced capabilities, you can connect to a desktop computer running desktop AutoCAD or a web-based version of desktop AutoCAD. You can also connect to mobile and web apps, so you can access the same desktop AutoCAD files on your laptop, iPad and other mobile devices.
Learn how to use AutoCAD Mobile. You can learn to use AutoCAD Mobile here: Use AutoCAD Mobile to design and view your drawings.
Use AutoCAD Mobile to design and view your drawings. Use AutoCAD Mobile to design and view your drawings.
Use AutoCAD Mobile to design and view your drawings.
Mobile apps provide many of the same basic drawing tools as their desktop counterparts. In fact, AutoCAD Mobile is basically a copy of the desktop version of AutoCAD, including the same licensing and features. You can use AutoCAD Mobile on both desktops and mobile devices, such as an iPad. You can also connect to a desktop version of AutoCAD running on a computer, laptop or desktop computer, tablet, or mobile device. If you connect to a desktop version of AutoCAD on a different computer, mobile device or tablet, you can edit and save your files in the desktop version. Then, when you open a drawing in the mobile version, it will be connected to the desktop version.
Mobile apps use an Android- or iOS-style interface. The Windows, Mac and Linux versions of AutoCAD use a traditional Microsoft Windows-style interface. All three mobile apps, plus the web-based version of desktop AutoCAD, use a traditional desktop Windows-style interface. However, all of the mobile apps also have a menu on the top-right corner of the screen that appears when you mouse over the menu
AutoCAD 2018 22.0 For Windows [2022]
Formats
AutoCAD 2022 Crack supports several file formats, including the following:
2D drawing formats AutoCAD Drawing Standard and AutoCAD Drawing Exchange (DXF) are the standard drawing format. AutoCAD can import and export in DXF format, thus allowing it to be used as a file format.
File formats AutoCAD offers several built-in file formats, including the following:
Text file formats Line, text, paragraph, and table styles can be applied to text to add information such as fonts, colors, point sizes, etc. Text can be exported as an image of the text if the image of the text is not specified in a drawing. If a text style is applied to a text object, the text in that object can also be exported as an image. Text styles can also be applied to text in other drawings to maintain the same style across the drawings.
File formats Several file formats can be opened, including the following:
Image formats Windows image files, TIFF, Portable Network Graphics (PNG), JPEG, and PostScript are supported. They can be saved in EPS format, with the resolution being a user-specified number.
File formats A number of file formats can be opened, including the following:
Scripts AutoCAD supports VBA and AutoLISP scripts, which can automate certain tasks in AutoCAD.
File formats A number of file formats can be opened, including the following:
SQL Server databases AutoCAD can also connect to SQL Server databases via ODBC or ADO interfaces. The database schema and data can be used in a drawing.
File formats Several file formats can be opened, including the following:
CAD standards Several standards have been developed by ANSI/ISO to govern the representation of AutoCAD. CAD standards are used by CAD system suppliers to drive interoperability with other CAD systems and maintain compatibility.
File formats There are multiple file formats to represent a vector drawing, including the following:
Vector objects These contain line information only, such as lines, arcs, and circles. These objects can be hidden, removed, or replaced with other objects.
File formats Several file formats can be opened, including the following:
AutoCAD variants AutoCAD variants are versions of AutoCAD with additional features. AutoCAD 2000 was the first version with more than one release.
File formats Several file formats can be opened, including
AutoCAD 2018 22.0 Crack+ Full Version Free Download
Run CadServer.exe.
Go to the Server tab.
Click Configure for Autocad.
Click Generate a License Key.
Batch scripts
Scripts need to be in the same folder.
*.bat files will run the following command: “CadServer.exe /autocad /localhost:1234 /username:autocad /password:123123”.
*.sh files will run the following command: “CadServer.exe /autocad /localhost:1234 /username:autocad /password:123123”.
Command:
C:autocadcadserver.exe /autocad /localhost:1234 /username:autocad /password:123123
Parameters:
autocad
localhost:1234
username:autocad
password:123123
Using FTP
List the IPs and port numbers of the Windows 2000 server and assign to them the correct credentials.
Change the DefaultConnection property of the FTP server to the IP address of the Windows 2000 server.
Assign the desired credentials for the FTP server.
See also
3D modeling software
List of 3D modeling software
Comparison of CAD editors
Comparison of CAE software
Comparison of CAM software
Comparison of Computer-aided manufacturing software
References
External links
Autodesk’s official Autocad news site
Autocad Tips
Autocad Tips and Tricks – YouTube
Category:Computer-aided design software
Category:Computer-aided manufacturing software
Category:Products introduced in 1985
Category:Technical communication tools
Category:Windows-only software
What’s New in the?
Print-ready AutoCAD helps you assemble multiple parts and sections of your drawings into a single print-ready file, but you can also create a single file for each component. And for the first time, you can draw a section of a print-ready drawing without having to open that drawing. This allows you to draw directly in your print-ready drawing and stay connected to the changes you make. The ability to draw a print-ready section helps you to do more design work in a single drawing session.
Use Markup Assist to send a feedback comment or line drawing back to your drawings from most software that you import or create in. AutoCAD’s Markup Assist functionality helps you to incorporate your feedback into your drawings much faster and easier.
Improved 2D Projection Creation:
Create large 2D projections in 3D models more easily than ever before. Create a large 2D view of a 3D model with a single click.
Create a 2D view of a 3D model. (video: 1:07 min.)
Create and display 2D views of 3D models. (video: 1:19 min.)
Create a printable 2D view of a 3D model. (video: 1:10 min.)
New 2D Text Box:
Create 2D boxes that you can plot into as if they were 2D views. Generate coordinates for any point on the drawing. Easily plot lines, circles, and text to these coordinates.
New 4D solids:
Create a 2D view of 3D solids, which are now a 4D object.
New Guide Panel for Drawings:
The new panel quickly shows the location of the active 3D drawing space. You can also select a new display type, and set the position of the guide panel to the coordinate system that you are drawing in.
New display types:
New “Show Reflection” display type shows the reflected image of the 3D drawing.
New “Hide Guide Panel” display type hides the panel that displays the location of the active drawing space.
New pane indicator:
New switch pane indicator for the drawing space:
New option to set the location of the reference plane in 3D:
New “Settings” button in the Preferences dialog box.
System Requirements For AutoCAD:
Windows XP or later.
Internet Explorer 10.
Internet Explorer 8 or later.
Mozilla Firefox.
Firefox (1.0.7 or later).
Safari.
Opera.
Netscape (7.2 or later).
Opera is not officially supported. Please provide a report if it works well.
The Google Chrome browser.
The Netscape browser.
The Flash Player 10.0 or later is required for
Autom. I know people who totally automated their NSX-V deployments using it (see also the PowerNSX part of our free PowerShell for Networking Engineers webinar).
I never looked into the details of NSX-T API, but NSX-V (and everything else vSphere-related) seems to use SOAP (XML-based) REST API, and NSX-T is probably no different. While it’s possible to interface with that API in any programming language, dealing with XML and its namespaces quickly becomes a major pain, so it’s much easier if someone already wrote the wrappers that provide high-level functionality like Anthony Burke and Nick Bradford did for NSX-V.
Last year VMware released PowerShell cmdlets for NSX-T, so that would be the natural way to go.
What do you think I should do? Start trying to reinvent my wheel that could help me to get more skills in Python and scratch my head? Or not reinvent the wheel but still scratch my head and stick with PowerShell?
The more programming environments you know the easier it will be to switch to a new one (trust me, started with COBOL, FORTRAN, Pascal, Lisp and Prolog back in the days, and have to deal with Perl, PHP, Python, JavaScript, CSS and a bit of Bash and Ruby these days), so the best thing to do if you want to automate NSX seems to be to invest some time in learning PowerShell.
Multiple frameworks/tools already exist, such as:
- NSX-T for Java:
- NSX-T for Python:
- NSX-T Terraform Provider:
- PowerCLI:
Also, the NSX-T API is described in an OpenAPI format, and as such you can download easily as a Postman Collection (here:).
I hope this can help.
Romain
In my experience you can use any of those, but the more flexible (not necessarily the easiest) approach is to develop against the API directly (and the OpenAPI spec that NSX-T provides is AMAZING) instead of being tied to the SDKs. As you also comment, it is good to "not reinvent the wheel": be aware of the tools and their scope, and only develop the things that are missing, or complex chainings of them, to fulfill your use case.
my 2c
KR,
AL
Once you have the SPI interface set up, you can send and receive data using the SPI_IOC_MESSAGE request. This is slightly different from other requests in that a macro is used to construct a request that also specifies the number of operations needed. Each operation is defined by a struct:
struct spi_ioc_transfer {
__u64 tx_buf;
__u64 rx_buf;
__u32 len;
__u32 speed_hz;
__u16 delay_usecs;
__u8 bits_per_word;
__u8 cs_change;
}
The fields are fairly obvious. The tx_buf and rx_buf are byte arrays used for the transmitted and received data – they can be the same array. The len field specifies the number of bytes in each array. The speed_hz field modifies the SPI clock. The delay_usecs field sets a delay before the chip select is deselected after the transfer. The cs_change field is true if you want the chip select to be deselected between each transfer. The best way to find out how this all works is to write the simplest possible example.
A Loopback Example
Because of the way that data is transferred on the SPI bus, it is very easy to test that everything is working without having to add any components. All you have to do is connect MOSI to MISO so that anything sent is also received in a loopback mode. There is an official example program to implement a loopback, but it is complicated for a first example and has a bug. Our version will be the simplest possible and, hopefully, without bugs.
First connect pin 19 to pin 21 using a jumper wire and start a new project. The program is very simple. First we check that the SPI bus is loaded:
checkSPI0();
and next we open spidev0.0:
int fd = open("/dev/spidev0.0", O_RDWR);
As this is a loopback test we really don't need to configure the bus as all that matters is that the transmit and receive channels have the same configuration. However, we do need some data to send:
uint8_t tx[] = {0xAA};
uint8_t rx[] = {0};
The hex value AA is useful in testing because it generates the bit sequence 10101010, which is easy to see on a logic analyzer.
To send the data we need an spi_ioc_transfer struct:
struct spi_ioc_transfer tr =
{
.tx_buf = (unsigned long)tx,
.rx_buf = (unsigned long)rx,
.len = 1,
.delay_usecs = 0,
.speed_hz = 500000,
.bits_per_word = 8,
};
We can now use the ioctl call to send and receive the data:
int status = ioctl(fd, SPI_IOC_MESSAGE(1), &tr);
if (status < 0)
printf("can't send data");
Finally we can check that the send and received data match and close the file.
Putting all of this together gives us the complete program:
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>
#include <stdint.h>
int main(int argc, char **argv)
{
checkSPI0();
uint8_t tx[] = {0xAA};
uint8_t rx[] = {0};
struct spi_ioc_transfer tr =
{
.tx_buf = (unsigned long)tx,
.rx_buf = (unsigned long)rx,
.len = 1,
.delay_usecs = 0,
.speed_hz = 500000,
.bits_per_word = 8,
};
int fd = open("/dev/spidev0.0", O_RDWR);
int status = ioctl(fd, SPI_IOC_MESSAGE(1), &tr);
if (status < 0)
printf("can't send data");
printf("%X,%X", tx[0],rx[0]);
close(fd);
}
Note: The checkSPI0 function needs to be added to this listing.
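The checkSPI0 function isn't listed in the article; as a rough sketch (an assumption about its behavior, not the author's actual code), it might simply verify that the spidev device node exists:

```c
#include <stdio.h>
#include <unistd.h>

/* Hypothetical sketch of checkSPI0: verify that the SPI driver has
   created the /dev/spidev0.0 device node. Returns 0 if it is present
   and -1 (after printing a hint) if it is not. */
int checkSPI0(void)
{
    if (access("/dev/spidev0.0", F_OK) != 0) {
        fprintf(stderr,
                "SPI0 not enabled - add dtparam=spi=on to /boot/config.txt\n");
        return -1;
    }
    return 0;
}
```

A version matching the article's usage could instead call exit() when the device is missing; returning a status simply leaves that decision to the caller.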
If you run the program and don't get any data, or receive the wrong data, then the most likely reason is that you have connected the wrong two pins, or not connected them at all. If you connect a logic analyzer to the four pins involved – 19, 21, 23 and 24 – you will see the data transfer:
If you look carefully you will see the CS0 line go low before the master places the first data bit on the MOSI, and hence on the MISO, line. Notice that the clock rises in the middle of each data bit, making this a mode 0 transfer.
If you need to configure the SPI interface you can use the ioctl calls. For example:
static uint8_t mode = 1;
int ret = ioctl(fd, SPI_IOC_WR_MODE, &mode);
if (ret == -1)
printf("can't set spi mode");
static uint8_t bits = 8;
ret = ioctl(fd, SPI_IOC_WR_BITS_PER_WORD, &bits);
if (ret == -1)
printf("can't set bits per word");
static uint32_t speed = 500000;
ret = ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);
if (ret == -1)
printf("can't set max speed hz");
After this you should see mode 1 selected and the clock going high at the start of each bit.
Source: https://i-programmer.info/programming/hardware/14609-pi-iot-in-c-using-linux-drivers-the-spi-driver.html?start=1
Hey, I read the FAQs, and I will post what I have. I did try to do this code, but I am having some trouble with it. If someone could help, I'd be extremely grateful. I think I have the majority of it done, but I might be wrong. What I need to do is here:
Write a program that determines which of 4 geographic regions within a major city (north, south,
east, and west) had the fewest reported traffic accidents last year. It should have the following
two functions, which are called by main.
The first function, getNumAccidents, is passed the name of a region. It asks the user for
the number of traffic accidents reported in that region during the last year and returns that
integer. It should be called once for each city region.
The second, findLowest, is passed the four accident totals. It determines which is the
smallest and prints the name of the region, along with its accident figure.
This is what I've done so far:
import javax.swing.JOptionPane;
public class douglassProgramFour {
public static void getNumAccidents(String[] locations)
{
int [] accidents=new int [4];
for(int counter=0;counter<accidents.length;counter++)
{
String input=JOptionPane.showInputDialog("How many accidents were in the " + (locations) + " district last year?");
accidents[counter]=Integer.parseInt(input);
System.out.println(accidents);
}
}
public static void findLowest(int accidents)
{
int lowest = 1000;
if(accidents < lowest)
{
lowest = accidents;
}
System.out.println(lowest);
}
public static void main(String[] args)
{
String [] locations={"East","South","North","West"};
getNumAccidents(locations);
findLowest(0);
}
}
Again, if I can get any help, I'd be very thankful.
Your method signatures for getNumAccidents and findLowest don't match the description given - getNumAccidents should take one string name for a region and then ask the user for the number of accidents in that region. The problem says the method should be called for all four regions, so you should make a loop in the main method which calls it once for each location.
Meanwhile, findLowest should be taking four integers as arguments rather than one and then looping through them to find which is lowest. Your logic for this is roughly right, but is only returning the minimum of the one input you're passing in which isn't much use
Also, getNumAccidents is supposed to return the number the user enters rather than printing it - that way, your main method can record the values it gets for each region and pass them in to the findLowest method afterwards
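Following that advice, a sketch of the intended structure might look like the following (the class name, prompts, and the use of Scanner instead of JOptionPane are placeholder choices, not part of the assignment):

```java
import java.util.Scanner;

public class AccidentReport {

    // Ask the user for the accident count in one named region and return it.
    public static int getNumAccidents(Scanner in, String region) {
        System.out.print("How many accidents were in the " + region
                + " district last year? ");
        return in.nextInt();
    }

    // Take the four totals, find the smallest, and report its region.
    public static String findLowest(int north, int south, int east, int west) {
        String[] names = {"North", "South", "East", "West"};
        int[] totals = {north, south, east, west};
        int lowest = 0;
        for (int i = 1; i < totals.length; i++) {
            if (totals[i] < totals[lowest]) {
                lowest = i;
            }
        }
        System.out.println(names[lowest] + " had the fewest accidents: "
                + totals[lowest]);
        return names[lowest];
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        String[] regions = {"North", "South", "East", "West"};
        int[] totals = new int[regions.length];
        // Call getNumAccidents once per region, as the assignment requires.
        for (int i = 0; i < regions.length; i++) {
            totals[i] = getNumAccidents(in, regions[i]);
        }
        findLowest(totals[0], totals[1], totals[2], totals[3]);
    }
}
```

Returning the region name from findLowest (in addition to printing it) makes the method easy to test in isolation.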
Source: http://forums.codeguru.com/showthread.php?536195-Code-Help&goto=nextnewest
hi team,
One user's account has been marked deleted in Crowd which is for our JIRA SSO. And the user could not log into JIRA any more.
Now we want to query out all dashboards and filters which this user subscribed in JIRA and migrate them to new JIRA account.
Is that possible to do migration through UI?
Thanks!
Jay
Changing the owner of a private filter is not possible. You could use the Script Runner plugin and the following script to copy the filters from the old user to another user.
import com.atlassian.jira.issue.search.SearchRequest
import com.atlassian.jira.util.SimpleErrorCollection
import com.atlassian.jira.bc.JiraServiceContextImpl

oldUser = 'olduser'
newUser = 'newuser'
result = ''
userUtil = componentManager.getUserUtil()
jsc = new JiraServiceContextImpl(userUtil.getUser(newUser), new SimpleErrorCollection(), componentManager.jiraAuthenticationContext.i18nHelper)
srs = componentManager.getSearchRequestService()
filters = srs.getOwnedFilters(userUtil.getUser(oldUser))
result += "Found ${filters.size()} filters for user $oldUser.\n"
filters.each {
    newFilter = new SearchRequest(it.query, newUser, it.name, it.description)
    srs.validateFilterForCreate(jsc, newFilter)
    if (jsc.errorCollection.errorMessages.size() == 0) {
        sr = srs.createFilter(jsc, newFilter)
        if (sr) {
            result += "Created filter ${sr.name} for $newUser.\n"
        } else {
            result += "Filter could not be created: ${jsc.errorCollection.errors}\n"
        }
    } else {
        result += "Filter creation validation failed: ${jsc.errorCollection.errors}\n"
    }
}
result
Henning
Hi Henning,
We asked the user to re-create his private filters and dashboards. Anyway, thanks for your help! Your script might be a good workarround. I saved it, and believe it will be helpful for the next time.
Thanks!
Jay
Also see . Given a dashboard, it will also change ownership of any filters that are used in that dashboard, if they are not visible to the person you are updating it to. Can multiselect filters and dashboards too.
Jamie,
Good to know about the script there! It seems it isn't possible to handle private filters, and private filters apparently cannot be retrieved by JIRA administrators.
Thanks!
Jay
Hi Jay,
The owner of the dashboard or filter can't be modified in the UI. If the dashboard or filter is accessible by the new account, then the user can copy those and use the copy instead
Ok! We have to ask user to re-subscribe/re-create filters.
Actually the enhancement here could help us, but unfortunately, it's still
unimplemented.
Anyway, thanks Janet!
It depends on the JIRA version you are using. In 5.1.8 it's possible to change the owner of a dashboard or filter through the corresponding menu items "Shared Filters" and "Shared Dashboards" under the "Users" administration menu. If you want to search for the filters or dashboards of the deactivated user you have to enter the username manually because the user is not in the list anymore.
Henning
Henning, unfortunately, we are using JIRA 5.0.6 and trying to migrate some private filters... I found private filters cannot even be queried out of the JIRA system.
Source: https://community.atlassian.com/t5/Jira-questions/Is-that-possible-to-migrate-dashboard-filter-from-one-user-to/qaq-p/150174
This article explains how to parse a config file in the form name=value similar to the windows .ini files. The code removes all whitespaces from the lines and skips empty lines and lines containing comments.
The code explained in this article can parse a file in this form:

# config parameters of an example file
generates_output=true
file_format=txt

but also in this disordered form:

# config parameters of an example file
generates_output = false
file_format=doc
The code below parses the config files. After we successfully load the file, we read it line by line. We remove all the whitespaces from the line and skip the line if it is empty or contains a comment (indicated by "#"). After that, we split the string "name=value" at the delimiter "=" and print the name and the value.
#include <iostream>
#include <fstream>
#include <string>
#include <algorithm>
#include <cctype>

int main()
{
    std::ifstream cFile("config2.txt");
    if (cFile.is_open())
    {
        std::string line;
        while (getline(cFile, line))
        {
            line.erase(remove_if(line.begin(), line.end(), isspace), line.end());
            if (line.empty() || line[0] == '#')
                continue;
            auto delimiterPos = line.find("=");
            auto name = line.substr(0, delimiterPos);
            auto value = line.substr(delimiterPos + 1);
            std::cout << name << " " << value << '\n';
        }
        cFile.close();
    }
    else
        std::cout << "Unable to open config file." << '\n';
    return 0;
}
Warning: If you added using namespace std at the top of your source code, you have to use ::isspace rather than isspace. This is because we want to use ::isspace from the global namespace (indicated by the scope operator ::) rather than std::isspace from what would be the current namespace (std).line.erase(remove_if(line.begin(), line.end(), ::isspace), line.end());
The following paragraphs explain the details of the implementation.
Before doing any further processing we remove all the whitespaces from the line. This is accomplished with the help of several functions, namely erase(), remove_if() and isspace.
line.erase(remove_if(line.begin(), line.end(), isspace), line.end());
The function remove_if() takes a sequence (i.e. the line) and transforms it into a sequence without the undesired characters (i.e. the whitespaces). The length of the sequence does not get altered, however the elements representing the undesired characters are moved to the end of the sequence and remain in an unspecified state. The function returns an iterator to the new end of the sequence. This is illustrated below. remove_if() takes three arguments. The first two arguments are forward iterators to the initial and final positions in the sequence. The last argument is a function pointer or a function object (in our case the address of the function isspace).
line.erase(remove_if(line.begin(), line.end(), isspace), line.end());
The function isspace checks whether an individual character is a whitespace character. Behind the scenes, isspace accepts a single element of the sequence as an argument and returns a value convertible to bool, i.e. true or false.
line.erase(remove_if(line.begin(), line.end(), isspace), line.end());
Note: In "C" locale whitespace characters include the space (’ ’), form feed (’\f’), line feed (’\n’), carriage return (’\r’), horizontal tab (’\t’), and vertical tab (’\v’). The backspace character (’\b’) is not a whitespace character in "C" locale. Different locales might define other whitespace characters. From C++ In a Nutshell: A Desktop Quick Reference By Ray Lischner.
The method std::string::erase() erases the sequence of characters in the range (first, last). In our case, it removes the part between the new end of the sequence returned by remove_if and the original end of the sequence. This can be seen in the figure below.
line.erase(remove_if(line.begin(), line.end(), isspace), line.end());
The code below splits the line at the delimiter =. Notice, that we are processing the line under the assumption that there are no whitespace characters, as they have all been removed in the previous step. We firstly find the position of the delimiter = with std::string::find(). After that, we use the method std::string::substr(pos, len) to extract the name and the value. The method substr(pos,len) creates a substring starting at the position pos and spans len characters (or until the end of the string, whichever comes first). Notice, that in the second case we pass only the first parameter, i.e. the position. The default value, i.e. all characters until the end of the string, is used as the second parameter.
auto delimiterPos = line.find("="); auto name = line.substr(0, delimiterPos); auto value = line.substr(delimiterPos + 1);
Source: http://walletfox.com/course/parseconfigfile.php
Agenda
See also: IRC log
<scribe> ACTION: CS to remove BenToWeb copyright notices from test files [recorded in]
<scribe> ACTION: CS to add documentation as to why certain test files do not validate [recorded in]
<scribe> ACTION: CS to copy video files to correct location in CVS [recorded in]
CS: Other issues?
CI: copyright notices, name of test files,
directory structure, id rules broken?
... incomplete specification (changes made)?
<scribe> ACTION: CS to fix references to rulesets.xml [recorded in]
CI: use of xpath expressions?
CS: empty namespace prefix - confusing?
<scribe> ACTION: CS to fix namespace prefixes in xpath expressions [recorded in]
CI: metadata consistency?
<scribe> ACTION: CS to check that TCDL files use only Task Force metadata [recorded in]
CI: comments about content
... user testing issue (expert guidance)?
<carlosI> <expertGuidance><p...
<carlosI> ...a form presented on a website, it might be necessary to validate the design by the target user group. </p></expertGuidance>
SAZ: info should be in WCAG techs or
understanding docs - not for test samples?
... ordinarily according to process should send back to author - not accepted, please rewrite expert guidance?
... should mark in wiki for updating
CI: can be more flexible now, since test samples now are from TF members
<scribe> ACTION: CI to send summary of encountered issues to mailing list and CS to relay internally within the BenToWeb project [recorded in]
SAZ: attempt telecon next week
Source: http://www.w3.org/2007/12/04-tsdtf-minutes.html
Reimplementation of QComboBox showPopup() works only once
We are trying to reimplement QComboBox::showPopup(). The code below sets the selected index and then hidePopup() is called. It works only once per combobox instance. Consecutive clicks on the same combobox do not invoke showPopup(). It seems that the call to hidePopup() has no effect and the internal state of the combobox is not reset...
@
#include <QWebView>
#include <QApplication>
#include <QComboBox>
#include <QDebug>
void QComboBox::showPopup()
{
qDebug()<<"QComboBox::showPopup()";
emit activated(3);
hidePopup();
}
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QWebView webView;
webView.setGeometry(0,0,800,600);
webView.load(QUrl(""));
webView.show();
return a.exec();
}
@
[andreyc EDIT: added @ around the code]
You need to derive your own class from QComboBox if you need to re-implement its member functions.
In your example QComboBox::showPopup() is never called.
Thanks for your replying.
My showPopup() is called. Because the message, "QComboBox::showPopup() ", is shown.
What I want to do is to create a widget in the reimplemented showPopup().
Frankly speaking I don't see from your example how QComboBox::showPopup() can be called.
Did you modify QComboBox class?
[quote]What I want to do is to create a widget in the reimplemented showPopup().[/quote]You need to define a widget class manually or in designer and then create it in the function.
Linux does allow us to override these two methods (showPopup, hidePopup) directly.
Could you try the code I posted.
Source: https://forum.qt.io/topic/41950/reimplementation-of-qcombobox-showpopup-works-only-once
Windows.Forms + GL Control based application
Posted Thursday, 3 January, 2013 - 14:35 by jjspierx
Hello everybody, I am brand new to OpenTK and trying to get myself started. I am wanting to create a GLControl window within a Windows.Form. I have been reading through the tutorials and have gotten a game window demonstration to work, but I am stuck on getting a GLControl window working. I am programming in C# using Visual Studio 2012. I tried using the following tutorial.
I am able to understand the tutorial, and can even compile the tutorial without any problems. However, the GLControl is just a white window within my Windows form. I have verified that the glControl1_Load event is firing, but nothing appears in the GLControl window. Even trying to change the background color of the window has no effect.
Does anybody have any ideas for me on what I may be doing wrong? The tutorial was written using Visual 2005, maybe things have changed with VS, C# or OpenTK since the tutorial was written?
Thanks in advance for any help.
Re: Windows.Forms + GL Control based application
Just a little update. I can control the size property of the GLControl window and see that it changes size when the run the program, but the control is always blank (white). I was able to follow another tutorial that used a panel control as a host for GL Control, but I was only able to get it to work using VB.net. Once I got it working in VB.net, I rewrote it to run in C#, but again the same thing happens, it is like the screen never refreshes, or invalidates. The tutorial I used that worked with VB.net, but not with C# is here:
There has to be something simple that I missing, something different about the way vb.net and C# need to be instructed to refresh the GLControl. I assumed .invalidate() would do the trick as shown in the examples, but alas, it does not seem to work.
Thanks in advance for any advice or suggestions.
Re: Windows.Forms + GL Control based application
Another update...
I notice the vb.net code uses "Import OpenTK.GLControl", however I cannot use what I assume to be the equivalent in C#, "using OpenTK.GLControl". First off, VS2012 does not show GLControl as a namespace to use, and if I type it in anyway, I get a compile-time error saying OpenTK.GLControl is a type and not a namespace. I am a mechanical engineer and not an experienced programmer, so I am unclear what this means, and wonder if this is the root of my problem? When I remove the "using OpenTK.GLControl;" the program still compiles fine, but the GLControl window is always blank.
The GLControl examples do work, I verified that; I just can't compile my own programs and get them to work.
Re: Windows.Forms + GL Control based application
Hi jjspierx,
I can't really tell very much from your description... all the tutorials you posted compile and run fine for me.
Since you say it compiles fine, I'm guessing it has to do with the Paint call. Double check that all your code matches the tutorials.
OpenGL (and OpenTK, by extension) render independently of the rest of the window. When you draw in OpenGL, you usually draw to a buffer in memory, which is then drawn on screen all at once to reduce flickering.
glControl1.SwapBuffers(); is the line of code in this case that draws everything in the OpenGL buffer to the screen. You'll need to put this line after all your drawing calls. In the C# tutorial, this is placed in your glControl1_Paint method.
A call to glControl1.Invalidate(); forces a call to the glControl1_Paint method, which in this case calls SwapBuffers(), but this isn't always the case, so don't get them confused.
As for your "using" problem, C# only allows you to "import" namespaces, unlike VB.NET. If you need access to OpenTK.GLControl, just add
using OpenTK;. Since you compile fine without it though, it's not strictly necessary.
Re: Windows.Forms + GL Control based application
Thanks for the response Aestivae. I never actually figured out what the problem was, but I know the code matched what was in the tutorials. I suspect what was happening is since I was pasting in the "events" their was no code being generated in the designer. Either way, I moved on was able to get the 3D cube glControl sample to work and that was more in line with what I was needing anyway. After getting the 3D cube example to work, I modified it to take in data from a gyro to move the cube with the gyro's pitch/roll and yaw. Pretty cool stuff. Thanks again for responding. I am enjoying learning OpenTK and appreciate such a helpful community.
Re: Windows.Forms + GL Control based application
Glad to hear it worked out.
For future reference, in C# (I don't know about VB.NET) you usually need to manually attach the event handlers using code like:
glControl1.Paint += new PaintEventHandler(glControl1_Paint);
Visual Studio's Intellisense generally helps a lot to figure out which events to attach. If you double click a control in the designer, it adds a line like this (usually the Click event) to the designer-side code, but I personally prefer to add handlers manually, since it gives more control.
Good luck :)
Source: http://www.opentk.com/node/3250
From: Anthony Williams (anthony_w.geo_at_[hidden])
Date: 2008-01-17 08:43:57
Tobias Schwinger <tschwinger <at> isonews2.com> writes:
>
> Anthony Williams wrote:
> > Tobias Schwinger <tschwinger <at> isonews2.com> writes:
> >> Anthony Williams wrote:
> >> Having a framework internally use some Singletons can greatly simplify
> >> its use.
> >
> > Possibly, but I don't think they're needed even then. If they're an
> > implementation detail of the framework, you don't need to make things a
> > singleton in order to ensure there is only one instance --- just create one.
>
> What's so different between "using a Singleton" and "just creating one"?
A singleton enforces that there is only one global instance with global
accessibility. If you just create one (and pass it down by dependency injection)
then the decision to just create one is made at the top level (where it
belongs), and the client code doesn't have to be fussed with how many instances
there are --- just that it can use the one provided.
void some_function(Logger& logger)
{
logger.log("something");
}
int main()
{
Logger myLogger; // ooh look, just one
some_function(myLogger);
}
vs
void some_function()
{
singleton<Logger>::instance()->log("something");
}
int main()
{
some_function();
}
> In fact, "just creating one" should be exactly what this library is all
> about.
Singleton is also called the Highlander pattern --- there can be only one.
> >
> >> Exposing a singleton to a user provides more flexibility than
> >> exposing a static interface (and can also improve performance).
> >
> > I don't see how. You can easily rewrite a static interface to use a
> > singleton
> > internally. Having a framework provide the user with a (reference-counted)
> > pointer to an interface suitably hides all the details.
>
> Yes, rewriting the code that uses the interface to use a non-static one
> seems less trivial of a task, however.
shared_ptr<Logger> getLogger();
void some_function()
{
getLogger()->log("something");
}
Client code doesn't have to know how many loggers there are. You could also pass
in a context object or factory:
void some_function(Context& context)
{
context.getLogger()->log("something");
}
> >
> >> A "tendency towards overuse" is not a good reason to reject a library,
> >> as it won't stop overuse and encourages more half-baked solutions that
> >> are written in a hurry.
> >
> > It is better to educate people in better ways of doing things (and provide
> > tools to make those things easy) than enable them to easily do something
> > that's generally a bad idea.
>
> Without promoting Singletons people will use globals. And they will use
> more globals than they would use Singletons (especially in C++ because
> without we can't be sure initialization code is run if we're inside a
> dynamic library, so we probably end up using POD typed globals).
Dependency injection is my preferred means, rather than globals or singletons.
It has the side effect that you can decide on an initialization order that best
suits your uses.
> >>>> * What is your evaluation of the design?
> >>>> * What is your evaluation of the implementation?
> >>> The design mixes several independent issues --- ensuring there is only one
> >>> instance of a class and avoiding initialization order problems with
> >>> on-demand
> >>> initialization for starters.
> >>>
> >>> A simple wrapper class that allows for on-demand initialization, would be
> >>> useful. Conflating that with "there shall be at most one instance" is not.
> >>>
> >>> Again, allowing for a preferred destruction sequence of objects with such
> >>> on-demand initialization might also be useful, but does not belong with
> >>> the
> >>> one-instance constraint.
> >> What is the point in managing construction order without static context?
> >
> > Sometimes there are uses for globals. In those cases, it is important to
> > manage the construction and destruction order, independent of how many
> > instances of any given class there may be.
>
> OK. With a Singleton, the type serves as the key to access the global
> which I personally find quite elegant.
and which I find to be the biggest problem with singletons.
> A singleton can easily hold several instances of any other class, if a
> container is put into it.
>
> Further you can use multiple-inheritance to instantiate many singletons
> with a common subobject at compile time.
>
> So what are the alternatives (concrete interface proposals, please)?
One of the benefits you cite for your singleton design is the
construction-on-demand, which ensures that things are always constructed when
required. In that case, how about providing a class:
template<typename T>
class construct_on_demand;
so you can create instances like:
construct_on_demand<Logger> myLogger;
and any call to myLogger->log() will ensure that the object is constructed
before first use.
The appropriate operator-> call can also manage destruction order if required.
Alternatively, if you just pass in a factory or context object, as in my example
above, the getLogger() function can ensure the logger object is constructed
before use.
> >
> >> What is the point of having more than one instance of a class that lives
> >> in static context -- and how would it be captured syntacticly?
> >
> > I can imagine a use for having lots of instances of boost::mutex at global
> > scope --- each mutex protects a different thing. You don't need to capture
> > the
> > syntactic idea "there may be more than one of this class" --- that is the
> > base
> > level assumption.
> >
>
> Interestingly, you show us that in this very case the things you try to
> protect should be 'mutexed_singleton'S to avoid namespace pollution and
> to add clarity.
That was the first thought off the top of my head. It may be that a Locked<T>
template (such as that mentioned by Phil Endecott) would be a good idea, but
even then you've got multiple instances of the Locked<> template at global
scope. My point is that just because something is a global doesn't mean you only
want one instance of that *class*.
Anthony
Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Source: http://lists.boost.org/Archives/boost/2008/01/132463.php
A Detailed Look at What’s New in Commerce Server 10
Now that the Commerce Server 10 Release Preview (CS10) has been available for a few weeks, we wanted to summarize all of the great new features now available to customers, as well as celebrate the fact that this is the first version of Commerce Server delivered in months versus years!
A New Approach to Building Commerce Server Web Sites
One of the largest pain points we have observed when talking to over a hundred Commerce Server customers and partners over the last year was the amount of effort that goes into building and maintaining Web sites. We were reminded over and over again that a .NET Commerce Server-literate developer was needed for even basic "look-and-feel" changes, and there were many circumstances where full codebase recompilations and redeployments were needed (with, of course, a lot of testing).
We knew that there had to be a better way, so we went back to the drawing board and reworked our development experience to ensure that “look-and-feel” creation and updates could be done by a HTML/JavaScript/CSS-developer without the need for code recompilations or that .NET/Commerce Server skill-set. We think this will greatly reduce both the up-front and ongoing investment required to build compelling Web experiences with Commerce Server.
We have implemented this in a couple of different ways to accommodate several development paradigms, which brings us to our next topic…
Support for the Latest Programming Models and CMS-agnostic
When Commerce Server was a Microsoft product, there was only one CMS, namely SharePoint. Now, we truly love SharePoint and it works very well for some customers, but it certainly does not meet the needs of the broader ecosystem by itself.
So we also went back to the drawing board to ensure that Commerce Server has a great story for:
- ASP.NET MVC
- ASP.NET Web Parts and Web Forms
- SharePoint 2010 (with SharePoint 2013 coming soon in the new year)
And, as such, we can now run standalone with just ASP.NET – or seamlessly integrate with the most popular content management systems many of our customers use such as Sitecore, Ektron, Umbraco, Kentico, Orchard, or any other MVC-, Web Form-, or Web-Part-compatible CMS.
Community-driven Out-of-Box Site Templates
One of the other historical challenges when Commerce Server was a Microsoft product was that the out-of-box site templates and many other useful product attributes were tied to the main server release. Because the main server core is not something that is shipped all that often, nor something that you want to ship that often, innovation around the periphery of the product was stifled. And there was no way for the broader Commerce Server ecosystem to contribute.
We’ve remedied all of this with CS10. All of the new site templates are hosted on Codeplex so that we can rapidly update these outside of the server core. In addition, ANYONE can contribute (and we highly encourage this). And, the product team will still curate everything to assure quality. This sets us up for far more rapid innovation in the future.
We’ve also leveraged industry-standard tools and frameworks such as Enterprise Library, Twitter Bootstrap, and jQuery to provide best-in-class Responsive Design experiences, versus something that would have to be significantly re-factored to be used in the real-world.
For details, see. We hope to see you there contributing!
Enhancements for Developers and IT Professionals
We also wanted to make the lives of Developers easier. So we have added code generation for the Multi-Channel Foundation APIs. This will provide numerous productivity benefits, including:
- Simply less code to write
- Compile-time safety to prevent pesky, hard-to-debug errors
- Synchronicity between API calls and underlying metadata and schemas
We have also made this community source as well – you can get it at.
For the IT Professional, we have made Nuget packages to get up and running quickly. It’s fair to say you can now be up-and-running with a fully functioning site in 15 minutes – something that was never possible before. Get these at.
Likewise, we also resurrected all of the old IT Pro tools and have made these available on Codeplex as well. Get them at.
Enhancements for SharePoint Customers
We also wanted to address a number of the pain points that were evident from post-mortems on the SharePoint support in CS2009 and R2. We have re-implemented our SharePoint integration leveraging the new frameworks built for ASP.NET – and as a result there are a number of key benefits to SharePoint customers, including:
- 2 web parts instead of ~50
- A significantly reduced set of SharePoint Features requiring activation and management
- Far fewer scenarios where recompilation and redeployment is necessary
- 2-4 times greater performance throughput on the same hardware
A Streamlined Foundation for the Future
One of the biggest challenges in evolving Commerce Server is the sheer size of the code-base. We also had the task of removing all of the Microsoft branding from the product. So, there are a number of core changes, including:
- All references to the “Microsoft” brand have been removed and the .NET namespace hierarchies have been reorganized and consolidated to make more logical sense
- Obsolete and infrequently-used features have been removed:
- Data Warehouse
- Direct Mailer (though List Management for direct mail system integration remains)
- Solution Storefronts (replaced with the new Site Templates)
- Silverlight-based business tools (given lack of Silverlight roadmap)
- MUI Tool
- Best Practices Analyzer
- Security Configuration Wizard
- Site Migration Tool
- Health Monitoring
- Volume Shadow Copy Service Writer (simply backup file system, registry, and SQL databases directly)
The breaking API changes are documented in the CS10 Breaking Changes guide, downloadable from. Although we are sorry for the inconvenience, this should be a search-and-replace exercise for most and we do feel that the new namespace hierarchies will be considerably more logical.
Improved Setup and Upgrade Stories
Data migration is supported from Commerce Server 2000 onwards, so that part of an upgrade will be very straightforward.
In addition, we have changed the way we distribute the product. The setup experience has been re-written from scratch using industry-standard technologies. As such there are now three simplified installers:
- Server Core (which includes everything)
- Business User Tools
- BizTalk Adapters
In addition, we now support build-to-build upgrades. This will first and foremost allow us to distribute new capabilities significantly easier and more quickly than before. Secondly, it will allow us to distribute patches as upgrades – so the chore of having to track and manage the installation of dozens of hotfixes goes away. We’ll talk more about the broader supportability implications of this in a future post – but we think the new way of doing things will be significantly streamlined and less painful than how servicing worked in the past.
Support for the Latest Microsoft Platforms
CS10 is optimized for the latest and greatest including Windows 8 client, Windows Server 2012, SQL Server 2012 SP1, .NET Framework 4.5, and Visual Studio 2012. It supports Windows Server 2008 R2 SP1, SQL Server 2008 R2 SP2, .NET Framework 3.5, and Visual Studio 2012. BizTalk Server 2010 can be used, and BizTalk Server 2013 support will be available shortly after that product becomes available.
Closing Thoughts
We are very excited about Commerce Server 10. We think it is the most customer-friendly and best-performing version of Commerce Server, ever – and is a great next step on our plans to rapidly evolve Commerce Server into the .NET commerce platform of choice. You can also find this information contained in presentation form in our Roadmap, which is now publicly posted at. Hope this helps!
Editorial Note: This Redux tutorial aims to help you with the how and why of Redux. Please note that you can write Redux apps with vanilla JavaScript or with JavaScript frameworks and libraries like Angular, React, Ember or jQuery.
The trend towards Single Page Applications (SPAs) has been increasing across responsive websites. On the whole, a SPA is just a web application that uses one HTML web page as an application shell and whose end-user interactions are implemented with JavaScript, HTML, and CSS.
This article is published from the DNC Magazine for Developers and Architects. Download this magazine from here [PDF] or Subscribe to this magazine for FREE and download all previous and current editions.
These days, it’s common to build SPAs using frameworks/libraries such as Angular or React.
With great power comes great complexity, and while SPAs can help you build fast and fluid User Interfaces (UI), they can also introduce new problems that we didn't have to deal with in old-style web applications.
Handling the data flow in SPA can be very hard and managing application states can be even harder. If you don’t handle data flow and states correctly, you can expect your application behavior to be unpredictable, inconsistent and untestable.
How do you tackle data flow and application state complexity?
There are many ways. For example, you can apply the Command Query Responsibility Segregation (CQRS) pattern, which isolates queries that read data from commands updating that data. Another option is the Event Sourcing pattern, which ensures that all changes in state are stored as a sequence of events.
In this article, we will explore the Redux pattern and how it can help tackle these SPA complexities.
In order to understand the Redux pattern, we should start with the Flux pattern. The Flux pattern was introduced by Facebook a few years ago.
Flux is a unidirectional data flow pattern that fits into the components architecture and blends with frameworks and libraries such as React and Angular 2.
Flux includes 4 main players: actions, dispatcher, stores and views. Figure 1 describes this pattern:
Figure 1: Flux Pattern
When a user interacts with a view, the view propagates an action to a central dispatcher. The dispatcher is responsible for propagating actions to one or many store objects.
Store objects hold the application data and business logic, and are responsible for registering with the dispatcher for actions. They also update any view affected by a specific action, indicating that data/state has changed. The view's responsibility is to update according to the new data/state, and to interact with users.
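To make the flow concrete, here is a deliberately minimal sketch of the dispatcher/store relationship in plain JavaScript. It is illustrative only — Facebook's real Flux dispatcher adds features such as `waitFor()` that are omitted here, and all names are made up for this example.

```javascript
// Minimal sketch of Flux's unidirectional flow (illustrative only).
function createDispatcher() {
  const callbacks = [];
  return {
    register(callback) { callbacks.push(callback); },
    dispatch(action) { callbacks.forEach((cb) => cb(action)); },
  };
}

// A store registers with the dispatcher and owns its own data and logic.
function createCounterStore(dispatcher) {
  let count = 0;
  const listeners = [];
  dispatcher.register((action) => {
    if (action.type === 'INCREMENT') {
      count += 1;
      listeners.forEach((l) => l(count)); // notify views of the change
    }
  });
  return {
    getCount: () => count,
    addChangeListener: (l) => listeners.push(l),
  };
}

const dispatcher = createDispatcher();
const store = createCounterStore(dispatcher);
store.addChangeListener((count) => console.log('view re-renders with', count));
dispatcher.dispatch({ type: 'INCREMENT' }); // view re-renders with 1
```

Note how the view never touches the store's data directly: it only dispatches actions and reacts to change notifications, which is what keeps the flow unidirectional.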
Another option to enter the data flow is an outside action which is not related to a view, such as timer callbacks or Ajax callbacks.
Now that we understand how the Flux pattern works, let’s drill down into Redux principles before we explore the Redux data flow and how it relates to Flux.
Redux is based upon 3 major principles:
1. Single source of truth
2. State is read-only
3. Changes are made with pure functions
Single source of truth
In Redux, there is only one store object. That store is responsible to hold all the application state in one object tree.
Having only one store object helps to simplify the debugging and profiling of the application because all the data is stored in one place. Also, difficult functionality such as redo/undo becomes simpler because the state is located in one place only.
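As a sketch of why a single state tree makes undo simpler, consider this hypothetical history helper in plain JavaScript (it is not part of the Redux API — the names are made up): because the entire application state is one value, undo is just restoring a previous snapshot.

```javascript
// Illustrative only: with one state tree, undo reduces to snapshot stacking.
function createHistory(initialState) {
  const past = [];
  let present = initialState;
  return {
    getState: () => present,
    set(newState) {
      past.push(present); // remember the old tree before replacing it
      present = newState;
    },
    undo() {
      if (past.length > 0) present = past.pop();
    },
  };
}

const history = createHistory({ items: [] });
history.set({ items: ['Milk'] });
history.undo();
console.log(history.getState()); // { items: [] }
```

With state scattered across many objects, each would need its own undo bookkeeping; with one tree, one stack of snapshots is enough.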
The following example demonstrates how you get the state from a store object using the getState function:
let state = store.getState();
State is read-only
The only way to mutate the state that is held by the store, is to emit an action that describes what happened. State can’t be manipulated by any object and that guards us from coupling problems and from some other side effects. Actions are just plain objects that describe the type of an action, and the data to change.
For example, the following code block shows how to dispatch two actions:
store.dispatch({
type: 'ADD_GROCERY_ITEM',
item: { productName: 'Milk' }
});
store.dispatch({
type: 'REMOVE_GROCERY_ITEM',
index: 3
});
In the example, we dispatch an add grocery item action, and a remove grocery item action. The data in the action objects is the item to add or the index of the object to remove.
Changes are made with pure functions
In order to express how state transition occurs, you will use a reducer function. All reducer functions are pure functions. A pure function is a function that receives input and produces output without changing the inputs.
In Redux, a reducer will receive the previous state and an action, and will produce a new state without changing the previous state. For example, in the next code block you can see a groceryItemsReducer which is responsible to react to add grocery item action:
function groceryItemsReducer(state, action) {
  switch (action.type) {
    case 'ADD_GROCERY_ITEM':
      return Object.assign({}, state, {
        groceryItems: [
          action.item,
          ...state.groceryItems
        ]
      });
    default:
      return state;
  }
}
As you can see, if an action isn’t recognized by the reducer, you return the previous state. In case you got an add grocery item action, you return a copy of the previous state that includes the new item.
Now that we understand the main Redux principles, we can move on to the Redux data flow.
The Redux data flow is based on Flux but it’s different in a lot of ways.
In Redux there is no central dispatcher. Redux includes only one store object, while Flux allows the usage of multiple stores.
In order to mutate the stored Redux state, you use reducer functions instead of inner business logic functionality that exists in Flux stores.
All in all, Redux is a new pattern that took some aspects of Flux and implemented them differently.
The following figure shows the Redux data flow:
Figure 2: Redux Data Flow
As you can see, the main players really resemble the players in Flux, except for the reducers and the lack of dispatcher object.
In the Redux data flow, a user interaction or an asynchronous callback will produce an action object. The action object will be created by a relevant action creator and be dispatched to the store. When the store receives an action, it will use a reducer to produce a new state. Then, the new state will be delivered to views and they will update accordingly. This data flow is much simpler than Flux and helps to produce a predictable application state.
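The store's role in this flow is small enough to sketch in a few lines. The following is an illustrative, stripped-down approximation of what a Redux-style `createStore` does internally — the real implementation has more checks and features:

```javascript
// Approximation of a Redux-style store (illustrative, not the real library).
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' }); // derive initial state
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the reducer produces the new state
      listeners.forEach((l) => l());  // views update in response
      return action;
    },
    subscribe(listener) {
      listeners.push(listener);
      return () => listeners.splice(listeners.indexOf(listener), 1);
    },
  };
}

const counter = (state = 0, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;
const store = createStore(counter);
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState()); // 1
```

Notice there is no dispatcher object anywhere: dispatching goes straight to the store, which delegates state transitions to the reducer.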
The Redux library was created by Dan Abramov. On the whole, it is a small and compact library that includes a small set of API functions. In order to get started with the library, you can install it using npm:
npm install --save redux
or download it from its repository on GitHub:.
The Redux library includes a small set of main API functions: createStore, combineReducers, applyMiddleware, bindActionCreators, and compose.
Now that you know about the Redux library, let’s see it in action.
Note: This example assumes that you have node and npm installed on your machine. If you don’t have node and npm installed on your machine, go to the following link to download and install them:.
It is also assumed that you have some TypeScript knowledge.
Create a new empty project and give it the name ReduxInAction.
Run npm init and initialize a new package.json file. In the package.json, add the following main and scripts properties:
"main": "src/index.js",
"scripts": {
"test": "node src/index.js"
}
Once you finished editing the package.json file, run the following command in the command line:
npm install --save redux
This command will install the Redux library in your project. Add a new tsconfig.json file and add to it the following code:
{
"compilerOptions": {
"moduleResolution": "node",
"module": "commonjs",
"target": "es5",
"sourceMap": true
},
"exclude": [
"node_modules"
]
}
Add a new src folder under the root of the project. In the src folder, add two new folders: actions and reducers.
In the actions folder, create an index.ts file and add to it the following code:
import {Action} from "redux";
export default class CounterActions {
static INCREMENT: string = 'INCREMENT';
static DECREMENT: string = 'DECREMENT';
increment(): Action {
return {
type: CounterActions.INCREMENT
};
}
decrement(): Action {
return {
type: CounterActions.DECREMENT
};
}
}
The CounterActions class is an action creator class which also contains the action types that are included in our application.
In the reducers folder, create an index.ts file and add to it the following code:
import CounterActions from "../actions/index";
const INITIAL_STATE = 0;
export default (state = INITIAL_STATE, action) => {
switch (action.type) {
case CounterActions.INCREMENT:
return state + 1;
case CounterActions.DECREMENT:
return state - 1;
default:
return state;
}
}
The reducer has the implementation of how to mutate the current state and produce a new state. Once an increment action arrives, the state is incremented by 1. Once a decrement action arrives, the state is decremented by 1.
Now you can create the application shell and run it. In the src folder, add index.ts file and add the following code to it:
import {createStore} from 'redux';
import counterReducer from './reducers';
import CounterActions from "./actions/index";
const store = createStore(counterReducer);
const counterActions = new CounterActions();
console.log(store.getState()); // output 0 to the console
store.dispatch(counterActions.increment() as any);
console.log(store.getState()); // output 1 to the console
store.dispatch(counterActions.increment() as any);
console.log(store.getState()); // output 2 to the console
store.dispatch(counterActions.decrement() as any);
console.log(store.getState()); // output 1 to the console
First, create a store using the createStore function, passing it the counterReducer created earlier. Then create a new CounterActions instance, dispatch a series of actions against the store, and print the state to the console after each one.
Once the application is in place, run the TypeScript compiler to compile all the files in the project:
tsc -p .
Then run the command npm run test to test the output that is written in the console.
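If you want to experiment without installing the Redux package, the same counter can be run with plain Node. This dependency-free rendition is a sketch — the tiny inline createStore only approximates redux's semantics — but it also demonstrates store.subscribe, which replaces the repeated manual console.log calls:

```javascript
// Dependency-free rendition of the counter (run with: node counter.js).
// The inline createStore is a stand-in for redux's, for experimentation only.
const counterReducer = (state = 0, action) => {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
};

function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch(a) { state = reducer(state, a); listeners.forEach((l) => l()); },
    subscribe(l) { listeners.push(l); },
  };
}

const store = createStore(counterReducer);
// Subscribe once instead of calling console.log after every dispatch:
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' }); // logs 1
store.dispatch({ type: 'INCREMENT' }); // logs 2
store.dispatch({ type: 'DECREMENT' }); // logs 1
```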
This is a simple example of how to use Redux and it doesn’t include any UI. In a follow up article, I’ll cover how to combine Redux with React library.
Redux became a very popular data flow pattern and is being used in many applications.
The Redux library is now a part of the efforts made by Facebook to build open source libraries. During the last year, I had the opportunity to use the pattern in several projects and it really proved itself by helping to simplify very complicated features.
While Redux can help to simplify data flow in big SPAs, it isn’t suitable to every application and in simple or small applications, it can produce a lot of unnecessary coding overhead.
I encourage you to deep dive into Redux and there is a free course that was recorded by Redux library creator Dan Abramov which delves into features that weren’t covered in this article:.
Download the entire source code of this article (Github)!
Hello to all!
This is my first and MOST definitely NOT the LAST post/question.
I am a novice doing DVD production capturing video footage of our Sunday Services. I have been using ULEAD DVD Movie Factory for some time now. I really like it. The problem that I am having is "massaging" the audio for that ambient sound.
I feed the signal to the computer via a DA-7 board using the record outs. It is then processed into a Athlon P4 3000+, 2Ghz 1MB Ram chip. The sound card is an ATI Rage theater video card. The signal is connected using S-Video.
The Sound card is a SB Audigy S2 card wit surround sound capabilities.
What I want to do (if "approved" by the members of the forum) is use the Vegas software to separate the audio and video so that I can "master" and produce a "softer" sound. This is just the beginning of my questions. How do I convert the .dwz files into a format that Vegas will recognize, or how do I get it to the .veg format?
Thank you in advance
Charles B aka ulremember
You need a CAD Converter.
BLUE SKY, BLACK DEATH!!
I know you are not tallking about Computer Aided Drafting!
The file extension used on ULEAD is a .dwz which I did google and found that is what it means it that app. If it should not or could not be that, how do I change it?
TIA
For whatever it's worth, I've been capturing video to my PC for about 6 years now and I have no idea at all what a "CAD Converter" is.
I don't know the answers to your questions. Sorry.
Here is a good one....Acme CADConverter is DWG, DXF and files.
Do you mean a Panasonic Ramsa WR-DA7 Mixer Board?
How many channels are you outputting? One stereo pair?
Is your goal to mix full 5.1 or stereo sound?
What are your video sources and how are you capturing?
DWZ is a proprietary project file for MF in your case, so you can disregard the information about CAD. You can't convert them to anything else, and if you could they would be pretty much useless anyway. Project files are used in just about every video application for saving the work you have done; they do not contain any assets like audio or video etc. The assets can be found wherever you have specified the application to save them to during capture.
Simply open the video file you have captured in Vegas, after your audio work is complete then import into MF for authoring.
Nepadigital Video Articles
DWZ files are project files, not video files. You can open DWZ project files with DVD Movie Factory but you can't "convert" them to a video file to import into another program.
You need to think of these things as RECIPES.
DWZ and VEG, etc. are all just different formats/styles/versions of recipes (based on what kind of kitchen equipment you've got).
These aren't the actual media assets (the cake).
Somebody gives you a recipe, they aren't giving you the cake; and vice-versa.
In the old days, the only way to CONVERT from one recipe style to another was via CMX EDL (text list file for on-line linear editor).
Nowadays, it's getting much more communal, what with OMF, AAF and now MXF. Even MP4 (and subsequent), with it's asset/metafile flexmux and script capabilities could do the trick.
Problem STILL is, you may have your 1st app where you're doing something like a PIP (picture-in-picture), but you convert it to the 2nd app style, where the 2nd app doesn't even understand WHAT a PIP is...
So users who dare to use multiple systems end up oftentimes totally recreating the recipe, FROM SCRATCH, on the new system.
But getting back to the OQ,
What are you trying to achieve?
Do you want less overall ambience? Do you want more diffuse global ambience? Less high-end in the ambience? Wanting to add in matching artificial ambience? Want to do corrective EQ?
Most of these kinds of things are best done on the front end by correct mike choices, correct mike placement, and correct acoustical treatment.
(I do know about this--I've been doing choral, organ, etc. recording for over 15 years).
So you've got a "surround-capable" sound card. Do you also have a surround mike setup? Do you have a capable (and discreet) multichannel mix output? How accurate is your end-to-end monitoring (E/E vs. Encode/Decode, etc)?
Are you relying on an internal sound card to do the A->D (this often adds digital noise via the MOBO RFI/EMI)?
More info please...
Scott
"Every closed eye is not sleeping, and every open eye is not seeing." - Bill Cosby
What I am trying to accomplish is capturing the live services at our church and producing DVDs. We are using three analog cameras that my dad operates as the video man. I am the FOH (if I can use that term.) We do have a Panasonic WR-DA7 board with all of the 16 channels used. I will not get into the specifics as of yet on what we are miking and how.
I am feeding the video signal to my capture card (ATI Rage Theater Video) via the S-Video input. I get the audio feed from the DA-7 via the stereo input on the SB Surround card (I forgot the model). I get a very good quality of video and audio that are in sync. My next step is to try and "massage" the audio so that I get a better, softer, ambient sound. I have found that I cannot equalize the audio nor add effects using Ulead Movie. I do notice that Vegas has that ability. Oh yeah, probably another topic is this: while I was trying to capture audio and video with Vegas, which I could not do simultaneously, I screwed up and clicked the "Closed Captioning" button in the options menu. Now I cannot turn it off, and the capture portion cannot open my ATI card for capture. It did prior to me clicking on Closed Captioning.
Now, back to the description. The sound that I have is raw and rough. We have very good singers and it is not the problem of trying to make them sound better, just a lil softer with more ambience, for lack of better words !
After the service is recorded, I edit out most of the clips so that I can put two choir songs and the message on a 4.7 DVD. I have not tried to change the quality settings as of yet so that I can get more on the DVD.
Now, is there a way that I can use Vegas to capture the audio and video at the same time?
Does anyone know how to turn off the "closed Captioning"? I have tried uninstalling and reinstalling. By the way, Ulead works fine when it comes to capturing
If I am going about this all wrong, I can take the criticism.
Thank you in advance
Chuck aka ulremember
So to clarify, the analog cameras go through some kind of switcher/mixer?
You don't want to cap cameras separately for post editing?
Why can't you use the audio board to EQ the individual Mics and subgroups?
It stands to reason you would want a separate EQ for choir, preacher and instruments.
Is there some kind of PA equalization issue that prevents doing it at the mixer?
You are only desiring a 2 channel stereo mix, not 5.1 surround?
Have you considered capturing audio and video together to DV format using a device like the Canopus 110? This will also give Vegas a timeline monitor preview and accurate audio monitoring.
Is there a budget?
Not to get into specifics, one way to soften the sound is to mic the audience. You can set up a mic at the back of the auditorium and mix it into your dry soundboard audio output.
Not sure how sophisticated your board is, but hopefully you can run the audience mic into the board and mix a separate output for the PC (verses what you send out to the PA).
Darryl
The Panasonic Ramsa WR-DA7 is a fairly full featured board. It should be capable of separate PA and recording EQ.
Yes we do have a video mixer, I forgot the type, but it is a good one. I do have some equalization coming through the mixing board, but the end sound is not "nice" and CD quality. Oh yeah, I am using a sampling rate of 48,000 Hz. I am not trying to achieve surround sound yet. I have the rec outs from the DA-7 going to the L/R inputs of the audio card.
I guess I am trying to achieve that CD quality sound- reverb, eq ect.
As I was saying, I think that Vegas can do that. I imported a recorded performance into it and did some nice changes. But when I tried to do the video and audio capturing with it, I could only get video. Then I screwed up and turned on Closed Captioning, and now I cannot capture video at all.
My brother told me to try and save the ULEAD capture as an .AVI, but I do not have that option.
There are some things you can do to reduce the amount of noise filtering that you need to do. It is very important to place the microphones close to the speaker or singer(s). Also, make sure that only the microphones that are actually in use are on. If you use good microphone placement and mixer practices, you shouldn't need to do much filtering. The other thing is make sure that you are using good microphones. You don't want to try to "fix" audio problems that are caused by poor microphones. In addition, make sure that you aren't over driving the audio level. It's difficult to know what you are describing without hearing it but the clipping caused by having your levels too high can also cause a "raw and rough" sound. Basically, you need to use proper placement of good microphones and correct mixer operation to deal with this.
Originally Posted by ulremember
My brother told me to try and save the ULEAD capture as an .AVI, but I do not have that option.
That won't accomplish anything. I don't understand, you have a captured file correct? What format is it in now? You only need to extract the audio and dump it into an audio editing program then use that for the soundtrack. I'm not referring to the DWZ, as I mentioned this does not contain any audio or video.
Quick comment, more later.
If the audio coming from the mixer is not CD quality the problem needs correction in the audio board settings. Garbage in can't be improved. If you monitor the input feed and find it high quality the problem is with your Audigy settings.
ATI+Audigiy are not a good capture setup for Vegas. It can be made to work but will be less desirable than getting a "flat" recording from the mixer. Make audio adjustments in the audio mixer first where you have control of each mic equalization. Fine touches can then be made in Vegas.
I suspect some of your problems are due to poor monitoring at the Vegas computer.
Are you able to spend some money or are you stuck with just the ATI and Audigy cards?
Without you having gone in-depth into the mike choices and their placement and their balance, it's very hard to tell what's going on there. But I'll make a stab at it...
So you're FOH--obviously a re-inforcement man. Using ALL 16 inputs? Tells me you've probably either got lots of Iso Lav mikes for multiple speakers or you've got a ~rock~ band setup (or both). And the bulk of what you're doing at the board is to keep the balance right and clean and to keep the feedback down while maintaining optimum SPL.
THIS IS SOMEWHAT AT ODDS WITH HOW YOU WANT A SIGNAL FOR RECORDING.
Sure for spoken word you want quite dry and close miked for clarity. But for "smoothness" of music you want somewhat more distant mike placement. And here's a big one:
OFTEN BEST MIKES FOR PA ARE VERY BAD FOR RECORDING (and vice-versa).
Without the opportunity for separate mikes altogether, you'd certainly want a separate recording submix.
And this is the kind of stuff that should NOT have to be EQ'd or "processed" after-the fact.
Maybe if you had a completely different mike chain with 2 or 3 cardioid mikes ~overhead front & center, and then 2 omni "hall" mikes for ambience, with a "mini"-mixer just for their balance (would also have more complete electrical isolation from PA channels--no crosstalk). This, however, depends alot upon what style of music/worship/acoustics you have--some work together well, some don't.
As an example of some choral stuff that I've done, see my CSJD work on my DEMOs page at my website:. This is 2-track live ORTF-style on telescoping stand in "gothic" style church with choir of ~25. Barely any EQ at all.
Is this the "smooth" that you're talking about? (Words like that are so subjective--maybe you could give some more concrete descriptions).
"RAW" is usually an indication of mike choice/placement mismatch, EMI/RFI interference, lower quality mikes, WAY TOO CLOSE miking (especially on unseasoned talent), or weird acoustics, even more than "EQ" choices.
More info, please...
Scott
I want to thank everyone that has taken the time to assist me in my adventure.
I really appreciate the time and advice.
Here are the mikes that we use for now. We just purchased four Shure Beta 87A's. Our lectern/pulpit mike is a Sennheiser, I forgot the model number, but it runs $400.00. Our three overhead phantom-powered choir mikes are Audio-Technicas, about 8 years old, I forgot those models too. I have direct ins for the keys and piano. I am miking the bassist with a Shure 57. I am trying to get rid of his amplifier (ongoing struggle). I mike my brother the percussionist with a Shure SM57 in between the congas and bongos. The drummer is miked overhead with a Shure 57, and the kick drum with an OLD Nady mike. Will get gone real soon. Will probably use a Shure 57.
Cornucopia, yes that is the sound I am looking for- "That hall sound". We have very poor acoustics at our church. Flat walls with no covering ( I am pushing for that at a later time. Like curtains or something). We are a Pentecostal Church with music that can get pretty loud and "involved") But that clarity and ambience is beautiful.
I have on YouTube a video clip that was done a while ago. You can probably still find it by searching ulremember. Also one of four 1/4-mile race cars that I wrench on. Oh yeah, you will hear the bass player over everyone; he had and has the problem of adjusting his level when he "feels good", which messes up the mix. Also, the audio is hot and goes up and down at times because I have a hard time monitoring the levels. I am in a "booth" at the top of the sanctuary. You might also want to check out the "True Identity" clip of my brother Deacon Craig Butler, to check out the spoken word audio.
Try this link:
I have gotten better results as of late. just have not uploaded yet. Will do so when I bring the ol' puter home again.
Cornucopia, is that an unedited or unmassaged recording you posted?
Well gotta go, and again everyone thank you
Chuck aka ulremember
re: clips on Demo page,
Those CSJD ones were recorded live 2track using 2 Shure KSM-44(?IIRC?)s in ORTF positions about 18' back and 12' up, sent to Mackie Mixer, then on to Alesis Masterlink (love that device!) @ 24bit/48kHz. Editing was only clean top & tails (w/FadeInsOuts), nothing in the program itself (that's how my choir sounds). Slight Limiting applied, then Dithered Normalize, then Bitdepth reduction to 16bit & SRC to 44.1kHz for CDs. That's it. You can SEE a similar setup in a video further down the page in the [Stereo3D] section.
re: your setup,
SM57's and SM87s are great for OldRockInstrument stuff and Stage stuff (respectively), but I'd never use them for recording unless: (1) I had no other budget and were stuck with them, or (2) I were looking to specifically get that kind of sound.
Really not trying to diss you, but I've NEVER found an AT mike that I didn't HATE, and I've used a lot of them (sometimes stuck with them as main mikes, too!). Sennheisers can be so-so, good, or great depending upon the model and their intended use.
Those mikes speak of PA use, not recording use. It's not unheard of to get $2000 mikes (each, not a pair) for good recording sound--they're on a completely different level (they're also alot more fragile).
Ok, so let's say you're ~stuck with the mikes you've already got.
One possible thing to do is:
1. Record a clip as nearly as DRY as possible (very close miking, with more acoustical treatments in hall--not just curtains--to get a more ANECHOIC sound. Kinda like if done raw multitrack in the studio with NO processing)
2. Do a 2ch. clean, dry mixdown that has all the balance right. Monitor the mixdown via a combination of headphones and near-field monitors in a dry environment (not at FOH, but in a studio edit room). Maybe a little bit of EQ here.
3. Get an audio app that can do "Reverb Impulse Convolution" and learn how to apply that to your clean mixdown. Save the finished render and --VOILA!.
This is roughly the equivalent of taking a dry recording (along with a great HiFi playback system) to an empty, famous, acoustically beautiful music hall and "Playing back" the dry recording through the speakers, while simultaneously recording THAT with ANOTHER system that's set up in the hall.
(And yes, I have actually done this and it works great also)
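For what it's worth, the math behind impulse convolution is simple enough to sketch. This toy JavaScript loop is illustrative only — real convolution-reverb plugins work on impulse responses hundreds of thousands of samples long and use FFTs for speed — but it shows the idea: every dry sample triggers a scaled copy of the hall's impulse response, and all the copies are summed.

```javascript
// Direct (O(n*m)) convolution of a dry signal with an impulse response.
// Toy sketch only; real reverb engines use FFT-based fast convolution.
function convolve(dry, impulse) {
  const wet = new Array(dry.length + impulse.length - 1).fill(0);
  for (let i = 0; i < dry.length; i++) {
    for (let j = 0; j < impulse.length; j++) {
      wet[i + j] += dry[i] * impulse[j]; // scaled, delayed copy of the IR
    }
  }
  return wet;
}

// A single click (1, 0, 0) convolved with an impulse response returns
// the impulse response itself -- i.e., the recorded "hall" sound.
console.log(convolve([1, 0, 0], [1, 0.5, 0.25])); // [1, 0.5, 0.25, 0, 0]
```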
IIRC, Sony's SoundForge can do this, as well as Plogue Bidou (sp?) and others.
Good luck, and let us know any more we can do...
Scott
Checked out the YouTube clip. Sound isn't bad (although it's strangely quieter at the beginning).
Cool tune.
This kind of sound SHOULD be a little drier than my example (it's more rhythmically lively, so you don't want ambience confusing the clarity of the impulses).
Those ATs are so close that, while they are helping to isolate the choir from the instruments and hall, they're emphasizing individual voices unevenly. Better mikes moved further away, but ones which are MORE directional, might do the trick. (Or go the other extreme and get an individual mike for EACH singer and multitrack it, but that's $$$)
Scott
http://forum.videohelp.com/threads/278479-Live-Church-Video-Capturing
Welcome to this, my first article in C#, and the first in a series on image
processing. I figure between Nish and Chris Losinger waiting to bust my
chops, I should learn as much as anyone from this article.
The purpose of the series will be to build a class that allows any C# programmer
access to common, and not so common, image processing functionality. The
reason we are doing it in C# is simply that I want to learn it, but the
functionality we use is available through GDI+ in C++, and indeed the code to
do the same thing using a DIBSECTION is not terribly
different. This first article will focus on per pixel filters, in other
words, filters that apply the same algorithm to each pixel 'in place' with no
regard for the values in any other pixels. You will see as we progress
that the code becomes somewhat more complex when we start moving pixels or
changing values based on calculations that take into account surrounding pixel
values.
The app we will use is a basic Windows Forms application ( it is in fact my
first ). I've included code to load and save images using GDI+, and a
menu to which I will add filters. The filters are all static functions in
a class called BitmapFilter, so that an image can be passed in ( C# passes
complex types in by reference ) and a bool returned to indicate success or
failure. As the series progresses I am sure the app will get some other
nice functionality, such as scaling and warping, but that will probably happen
as the focus of an article after the core functionality is in place.
Scrolling is achieved in the standard manner: the Paint method uses the AutoScrollPosition
property to find out our scroll position, which is set by using the AutoScrollMinSize
property. Zooming is achieved through a double, which we set
whenever we change the scale, and which is used to set the AutoScrollMinSize
anew, as well as to scale the Rectangle we pass into DrawImage
in the Paint method.
My first real disappointment in building this code was to find that the BitmapData
class in GDI+ does not allow us to access the data it stores, except through a
pointer. This means we need to use the unsafe keyword to
scope the block of code which accesses the data. The net effect of this
is that a high security level is required for our code to execute, i.e. any
code using the BitmapData class is not likely to be run from a
remote client. This is not an issue for us right now, though, and it is
our only viable option, as GetPixel/SetPixel is
simply too slow for us to use iterating through bitmaps of any real size.
The other downside is that this class is meant to be portable, but anyone using
it will need to change their project settings to support compilation of unsafe
code.
A quirk I noticed from the first beta of GDI+ continues to this day, namely
requesting a 24bitRGB image will return a 24bitBGR image. BGR ( that is,
pixels are stored as blue, green, red values ) is the way Windows stores things
internally, but I'm sure more than a few people will get a surprise when they
first use this function and realise they are not getting what they asked for.
Here, then, is our first, and most simple filter - it simply inverts a bitmap,
meaning that we subtract each pixel value from 255.
public static bool Invert(Bitmap b)
{
// GDI+ still lies to us - the return format is BGR, NOT RGB.
BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
int stride = bmData.Stride;
System.IntPtr Scan0 = bmData.Scan0;
unsafe
{
byte * p = (byte *)(void *)Scan0;
int nOffset = stride - b.Width*3;
int nWidth = b.Width * 3;
for(int y=0;y < b.Height;++y)
{
for(int x=0; x < nWidth; ++x )
{
p[0] = (byte)(255-p[0]);
++p;
}
p += nOffset;
}
}
b.UnlockBits(bmData);
return true;
}
This example is so simple that it doesn't even matter that the pixels are out of
order. The stride member tells us how wide a single line is, and the
Scan0 member is the pointer to the data. Within our unsafe block we grab
the pointer, and calculate our offset. All bitmaps are word aligned, and
so there can be a difference between the size of a row and the number of pixels
in it. This padding must be skipped, if we try and access it we will not
simply fail, we will crash. We therefore calculate the offset we need to
jump at the end of each row and store it as nOffset.
The key thing in image processing is to do as much outside the loop as
possible. An image of 1024x768 contains 786,432 individual pixels, so adding a
function call or creating a variable inside the loops means a lot of extra
overhead. In this case, our x loop steps through Width*3 iterations; when we
care about each individual color, we will step through the width only, and
increment our pointer by 3 for each pixel.
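The stride/padding arithmetic is easy to get wrong, so here it is in isolation as a small Python sketch (Python only for illustration; the article's code is C#, and `row_layout` is a made-up helper name). In practice the stride GDI+ reports is rounded up to a 4-byte boundary:

```python
# Sketch of the row layout for a 24-bit bitmap: 3 bytes per pixel (BGR),
# with each row padded up to a 4-byte-aligned stride.

def row_layout(width_px):
    row_bytes = width_px * 3              # data bytes actually used by pixels
    stride = (row_bytes + 3) // 4 * 4     # round up to a multiple of 4
    offset = stride - row_bytes           # padding to skip at the end of a row
    return stride, offset

print(row_layout(5))     # 15 data bytes, stride 16 -> (16, 1)
print(row_layout(1024))  # already aligned -> (3072, 0)
```

The `offset` value here plays the role of `nOffset` in the filters: it is the jump added to the pointer after each row so the padding bytes are never touched.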
That should leave the rest of the code pretty straightforward. We are
stepping through each pixel, and reversing it, as you can see here:
Subsequent examples will show less and less of the code, as you become more
familiar with what the boilerplate part of it does. The next, obvious
filter is a grayscale filter. You might think that this would involve
simply summing the three color values and dividing by three, but this does not
take into account the degree to which our eyes are sensitive to different
colors. The correct balance is used in the following code:
unsafe
{
byte * p = (byte *)(void *)Scan0;
int nOffset = stride - b.Width*3;
byte red, green, blue;
for(int y=0;y < b.Height;++y)
{
for(int x=0; x < b.Width; ++x )
{
blue = p[0];
green = p[1];
red = p[2];
p[0] = p[1] = p[2] = (byte)(.299 * red
+ .587 * green
+ .114 * blue);
p += 3;
}
p += nOffset;
}
}
As you can see, we are now iterating through the row b.Width times, and stepping
through the pointer in increments of 3, extracting the red, green and blue
values individually. Recall that we are pulling out bgr values, not
rgb. Then we apply our formula to turn them into the grey value, which
obviously is the same for red, green and blue. The end result looks like
this:
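The weighting itself can be checked independently of GDI+. Here is the same formula as a tiny Python sketch (Python only for illustration; `to_gray` is a made-up helper name, taking the channels in the BGR order the article describes):

```python
def to_gray(blue, green, red):
    # perceptual weights from the article: the eye is most sensitive to
    # green, least to blue, so the channels are weighted unequally
    return int(.299 * red + .587 * green + .114 * blue)

print(to_gray(0, 0, 255))   # pure red  -> 76
print(to_gray(0, 255, 0))   # pure green -> 149
print(to_gray(255, 0, 0))   # pure blue -> 29
```

Note how a pure green pixel ends up almost twice as bright as pure red, and five times brighter than pure blue, even though all three started at full intensity.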
It's worthwhile observing before we continue that the Invert filter is the only
non-destructive filter we will look at. That is to say, the grayscale
filter obviously discards information, so that the original bitmap
cannot be reconstructed from the data that remains. The same is also true
as we move into filters which take parameters. Doing a Brightness filter
of 100, and then of -100 will not result in the original image - we will lose
contrast. The reason for that is that the values are clamped - the
Brightness filter adds a value to each pixel, and if we go over 255 or below 0
the value is adjusted accordingly and so the difference between pixels that
have been moved to a boundary is discarded.
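A quick Python sketch (for illustration only; `brighten` is a hypothetical helper) makes the lossiness concrete:

```python
def brighten(pixel, amount):
    # same add-and-clamp step as the article's Brightness filter
    return max(0, min(255, pixel + amount))

p = 200
after = brighten(brighten(p, 100), -100)
print(after)   # 155, not 200: the clamp at 255 discarded the difference
```

Any pixel that hits the 0 or 255 boundary loses its distance from its neighbours, which is exactly why Brightness 100 followed by Brightness -100 does not restore the original image.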
Having said that, the actual filter is pretty simple, based on what we already
know:
for(int y=0;y<b.Height;++y)
{
for (int x = 0; x < nWidth; ++x)
{
nVal = (int) (p[0] + nBrightness);
if (nVal < 0) nVal = 0;
if (nVal > 255) nVal = 255;
p[0] = (byte)nVal;
++p;
}
p += nOffset;
}
The two examples below use the values of 50 and -50 respectively, both on the
original image
The operation of contrast is certainly the most complex we have attempted.
Instead of just moving all the pixels in the same direction, we must either
increase or decrease the difference between groups of pixels. We accept
values between -100 and 100, but we turn these into a double between
the values of 0 and 4.
if (nContrast < -100) return false;
if (nContrast > 100) return false;
double pixel = 0, contrast = (100.0+nContrast)/100.0;
contrast *= contrast;
My policy has been to return false when invalid values are passed in, rather
than clamp them, because they may be the result of a typo, and therefore
clamping may not represent what is wanted, and also so users can find out what
values are valid, and thus have a realistic expectation of what result a given
value might give.
Our loop treats each color in the one iteration, although it's not necessary in
this case to do it that way.
red = p[2];
pixel = red/255.0;
pixel -= 0.5;
pixel *= contrast;
pixel += 0.5;
pixel *= 255;
if (pixel < 0) pixel = 0;
if (pixel > 255) pixel = 255;
p[2] = (byte) pixel;
We turn the pixel into a value between 0 and 1, and subtract .5. The net
result is a negative value for pixels that should be darkened, and positive for
values we want to lighten. We multiply this value by our contrast value,
then reverse the process. Finally we clamp the result to make sure it is
a valid color value. The following images use contrast values of 30 and
-30 respectively.
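Putting the steps above together, the whole per-pixel contrast mapping fits in a few lines of Python (illustration only; `contrast_pixel` is a hypothetical helper, not part of the article's BitmapFilter class):

```python
def contrast_pixel(p, n_contrast):
    # same mapping as the article: -100..100 becomes a multiplier in 0..4
    c = ((100.0 + n_contrast) / 100.0) ** 2
    v = ((p / 255.0 - 0.5) * c + 0.5) * 255.0   # scale about the midpoint
    return max(0.0, min(255.0, v))              # clamp to a valid value

print(contrast_pixel(0, -100))   # contrast 0 collapses everything to 127.5
print(contrast_pixel(0, 100))    # maximum contrast pushes darks to 0.0
```

At the extremes the behaviour is easy to see: -100 flattens every pixel to mid-grey, while +100 drives values away from the midpoint until they clamp.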
First of all, an explanation of this filter. The following explanation of
gamma was found on the web: In the early days of television it was discovered
that CRTs do not produce a light intensity that is proportional to the input
voltage. Instead, the intensity produced by a CRT is proportional to the input
voltage raised to the power gamma. The value of gamma
varies depending on the CRT, but is usually close to 2.5. The gamma response of
a CRT is caused by electrostatic effects in the electron gun. In other
words, the blue on my screen might well not be the same as the blue on your
screen. A gamma filter attempts to correct for this. It does this
by building a gamma ramp, an array of 256 values for red, green and blue based
on the gamma value passed in (between .2 and 5). The array is built like
this:
byte [] redGamma = new byte [256];
byte [] greenGamma = new byte [256];
byte [] blueGamma = new byte [256];
for (int i = 0; i < 256; ++i)
{
redGamma[i] = (byte)Math.Min(255, (int)(( 255.0
* Math.Pow(i/255.0, 1.0/red)) + 0.5));
greenGamma[i] = (byte)Math.Min(255, (int)(( 255.0
* Math.Pow(i/255.0, 1.0/green)) + 0.5));
blueGamma[i] = (byte)Math.Min(255, (int)(( 255.0
* Math.Pow(i/255.0, 1.0/blue)) + 0.5));
}
You'll note at this point in development I found the Math class.
Having built this ramp, we step through our image, and set our values to the
values stored at the indices in the array. For example, if a red value is
5, it will be set to redGamma[5]. The code to perform this operation is
self evident, I'll jump right to the examples. I've used Gamma values of
.6 and 3 for the two examples, with the original as always first for
comparison. I used the same values for red, green and blue, but the
filter allows them to differ.
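The ramp construction is easy to verify outside C#. Here is the same lookup table in Python (illustration only; `gamma_ramp` is a made-up name for one channel's ramp):

```python
def gamma_ramp(g):
    # same table as the article's loop: 255 * (i/255)^(1/gamma), rounded,
    # capped at 255
    return [min(255, int(255.0 * (i / 255.0) ** (1.0 / g) + 0.5))
            for i in range(256)]

ramp = gamma_ramp(2.5)
print(ramp[0], ramp[255])   # endpoints are fixed: 0 and 255
print(ramp[128])            # midtones are lifted when gamma > 1
```

Because the endpoints map to themselves, a gamma filter never changes pure black or pure white; it only redistributes the midtones, brightening them for gamma above 1 and darkening them below.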
Our last filter is a color filter. It is very simple - it just adds
or subtracts a value from each color. The most useful thing to do with this
filter is to set two colors to -255 in order to strip them and see one color
component of an image. I imagine by now you'd know exactly what that code
will look like, so I'll give you the red, green and blue components of my son
to finish with. I hope you found this article informative; the next will
cover convolution filters, such as edge detection, smoothing, sharpening,
simple embossing, etc. See you then!!!
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
public static Color ColorBlend(Color c1, Color c2, Func<int, int, int> function)
{
    return Color.FromArgb(
        function(c1.A, c2.A),
        function(c1.R, c2.R),
        function(c1.G, c2.G),
        function(c1.B, c2.B)
    );
}

public static int Multiply(int b, int l)
{
    return ((b * l) / 255);
}

// the blend function must be passed in here too, or this overload won't compile
public static Color ColorBlend(int value, Color clr, Func<int, int, int> function)
{
    return Color.FromArgb(
        function(value, clr.A),
        function(value, clr.R),
        function(value, clr.G),
        function(value, clr.B)
    );
}
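To see what the Multiply blend does per channel, here is the same integer arithmetic as a quick Python check (Python only for illustration; `multiply` mirrors the C# Multiply above):

```python
def multiply(b, l):
    # per-channel multiply blend: scale one channel by the other,
    # normalized back into the 0..255 range with integer division
    return (b * l) // 255

print(multiply(255, 128))  # blending with full intensity passes l through
print(multiply(0, 200))    # blending with black always gives black
```

Multiplying can only darken: the result is never larger than either input channel, which is why multiply blends are often used for shadows.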
using System.Runtime.InteropServices;
public static bool Grayscale(Bitmap b)
{
    // the return format is actually BGR, not RGB
    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int stride = bmData.Stride;
    IntPtr Scan0 = bmData.Scan0;
    // copy stride * height bytes, so any per-row padding is accounted for
    int numBytes = stride * b.Height;
    byte[] rgbValues = new byte[numBytes];
    Marshal.Copy(Scan0, rgbValues, 0, numBytes);
    byte red, green, blue;
    for (int y = 0; y < b.Height; ++y)
    {
        int row = y * stride;
        for (int x = 0; x < b.Width; ++x)
        {
            int i = row + x * 3;
            blue = rgbValues[i];
            green = rgbValues[i + 1];
            red = rgbValues[i + 2];
            rgbValues[i] = rgbValues[i + 1] = rgbValues[i + 2] = (byte)(.299 * red + .587 * green + .114 * blue);
        }
    }
    Marshal.Copy(rgbValues, 0, Scan0, numBytes);
    b.UnlockBits(bmData);
    return true;
}
[System.Runtime.InteropServices.DllImport("GDI32.dll")]
public static extern bool DeleteObject(IntPtr objectHandle);
// Note: Scan0 from LockBits is a pointer into the bitmap's own pixel buffer,
// not a GDI object handle, so it must not be passed to DeleteObject.
https://www.codeproject.com/Articles/1989/Image-Processing-for-Dummies-with-C-and-GDI-Part-1?fid=3488&df=90&mpp=25&prof=True&sort=Position&view=Normal&spc=Relaxed&fr=51
Concurrency demos/Simple producer and consumer
From HaskellWiki
A simple example of a producer thread, performing IO events, and passing them over a Chan, to a consumer thread, which processes the events as they arrive (in a pure manner).
The interesting point to note is that Chans abstract over unsafeInterleaveIO, and allow for pure processing of streams of external events.
import Data.Char
import Control.Concurrent
import Control.Concurrent.Chan

main = do
    c  <- newChan
    cs <- getChanContents c   -- a lazy stream of events from eventReader
    forkIO (producer c)       -- char producer
    consumer cs
  where
    -- thread one: the event producer
    producer c = forever $ do
        key <- getChar
        writeChan c key

    -- thread two: the lazy consumer
    consumer = mapM_ print . map shift
      where
        shift c | isAlpha c = chr (ord c + 1)
                | otherwise = c

    forever a = a >> forever a
And running it:
$ ghc -O -threaded A.hs -o a
$ ./a
a
'b'
'\n'
b
'c'
'\n'
d
'e'
'\n'
https://wiki.haskell.org/Concurrency_demos/Simple_producer_and_consumer
Money Borrowing, Gold Smuggling and Diamond Mining: An Englishman in Pombaline Circles
Tijl Vanneste1
Abstract.
Keywords
Economic History, Contraband Trade, Diamonds, Anglo-Portuguese Relations, Pombal
Resumo.
Palavras-chave
História Económica, Contrabando, Diamantes, Relações Anglo-Portuguesas, Pombal
The Anglo-Portuguese Relationship
Any history of Portugal’s economy during the eighteenth century is partly a history of its relationship with England. Historians have written at length about the nature of this relationship (Noya Pinto, 1979; Fisher, 1971; Sideri, 1970; Macedo, 1963). Essentially, the Spanish threat forced Portugal to obtain a long-term military and political ally, while the English were perfectly able to use Luso-Brazilian markets as an outlet for their manufactured goods. The imports of Brazilian gold and diamonds further served to widen the trade imbalance in the long run (Maxwell, 1968: 612; Macedo, 1963: 46-47). Whether the Anglo-Portuguese relationship established in the treaties of 1642, 1654, 1661 and 1703 (the Methuen Treaty) was beneficial (Fisher, 1963: 220) or detrimental (Sideri, 1970: 51-54) to Portugal’s economy is a question that remains unresolved (see also the essays in Cardoso et al., 2003), and contemporary voices already expressed their doubts about the nature of the economic relationship with England. The privileges that the English were granted, for instance, in their trade with Brazil, which had already been arranged on a monopolistic basis since the foundation of the Companhia Geral do Comércio do Brasil in 1649, were subject to scrutiny, and the country’s dependence on England was discussed by contemporary economic thinkers and government officials.
Eighteenth-century economic reforms came to be strongly identified with the rise to power of Sebastião José de Carvalho e Melo, the future Marquis of Pombal. Portuguese historians still ponder the relationship between Pombal’s actions and the wider historical context of the time (Hespanha, 2007; Monteiro, 2006; Subtil, 2007), but there is no doubt that Pombal’s economic policies, while not entirely new, proved to be an important turning point in Portuguese economic history (Cardoso, 2005; Maxwell, 1995; Macedo, 1951). As José Subtil pointed out in his monograph on Pombal, Lisbon’s 1755 earthquake was also a political earthquake (Subtil, 2007) and momentum was created to adopt economic policies that would lead to a stronger, and more nationalized, Portuguese economy (Pereira, 2009: 488-496). This enabled Pombal to maneuver himself closer to the king, at the center of political power (Monteiro, 2006: 81). From such a position, the new prime minister, who already had experience of foreign policy after diplomatic stints in London and Vienna and his appointment as minister of foreign affairs in the early 1750s, was able to implement an economic and political strategy intended to strengthen Portugal’s national economy, particularly in its relations with England.
Because of this, Pombal’s policy has often been labeled as “mercantilist”, but it went further than that (Macedo, 1966). Mercantilist techniques, such as the foundation of monopoly companies, were applied in order to create a class of Portuguese merchants who could compete with foreign traders (Maxwell, 1995: 67). This did not mean, however, that Pombal sought to destroy the privileged relationship with the English; he merely wished it to be more equitable (Maxwell, 1995: 66). He also wanted to curtail English contraband trade in Lisbon, as the outflow of gold to England was immense, which was considered to be detrimental to the Portuguese economy (Sousa, 2006). In order to fully apprehend Pombal’s economic policy, it is crucial to understand its relationship with the previously established historical context of Portuguese dependence on English imports, traders, and shippers. Portuguese historians have rightly pointed out that it was not just Pombal himself who mattered, but that one also has to take into account the wider context (Cardoso, 2013; d’Azevedo, 1922). José Subtil argued that the agents of change after 1755 not only included Pombal, but also a network of “political accomplices” (Subtil, 2007).
One could, of course, expand this notion to include economic accomplices, and this article will be concerned with one such accomplice, remarkably enough an Englishman, whose relationship with Pombal dated from the time of the latter’s stay in London as a Portuguese envoy. John Bristow was involved in trade with Portugal at that time, and would later move to Lisbon, where he became a well-known member of the English trade factory. This organization was founded in 1711 by 59 firms, which were mainly active in textiles, grain and the wine business, as well as general shipping and finance. Their business extended to the Portuguese colonies, Newfoundland, the Mediterranean and the home territory (Fisher, 1981: 23-44). The English factory in Lisbon was a legal entity, with the English Consul-General serving as its presiding officer. Disputes between English and Portuguese merchants were settled by a special judge. The trade factory had its own chapel and burial ground (Lodge, 1933: 225).2 Membership of the factory was crucial for English traders who were active in Lisbon, and was not free of charge. An annual tax had to be paid, the amount of which depended on the total volume of trade imports made by a particular merchant (Walford, 1940: 33-36).
Many of these merchants acted on their own behalf, but they also worked as agents for firms in London. It was the Lisbon-based firm of Bristow, Warde & C° that acted as an agent for the initially London-based trader John Bristow (Sutherland, 1984a: 372). Bristow was born in 1701, became a Member of Parliament in 1734, and acted as a director and governor of the South Sea Company between 1730 and 1761.3 Bristow was a man of a certain financial standing, as shown by his subscription of £150,000 in an English government loan in 1744 (Shaw, 1998: 89; Dickson, 1993: 289), but he lost his fortune in the aftermath of the 1755 Lisbon earthquake. In 1749, he was one of the partners in a company that also included Samson Gideon, a wealthy Jewish-turned-Christian financier, and Francis Salvador, a highly successful Anglo-Jewish diamond trader, and which was commissioned to deliver three million ounces of silver to the East India Company, to be used in the purchase of Indian diamonds.4 From the 1750s onwards, he resided in the Portuguese capital, and he died there in 1768, leaving a lot of financial debts behind.5 Before his fall, it is important to consider his rise within an Anglo-Portuguese context, a rise that had begun at a time when Pombal was active in politics, but not yet the powerful figure he was to become after 1755.
Brazilian Contraband from Lisbon to London
Gold was a central issue in the Anglo-Portuguese relationship. In the late seventeenth century, the discovery of gold deposits in the interior heartland of Brazil led to a true gold rush (Higgins, 1999: 17-42; Antonil, 1967: 119-123). While this served to revitalize a Portuguese economy that was already in a state of crisis (Boxer, 1969: 456) and to balance the trade deficit (Godinho, 2005: 326), it also “provided England with a new and essential source of bullion” (Sideri, 1970: 49). In a good year, such as 1725, 20,000 kilograms of Brazilian gold reached Portugal (Godinho, 1953: 83), and Charles Boxer has suggested that between half and three quarters of that amount had found its way to England (Boxer, 1962: 157). The exact volume of Brazilian gold that made it to Portugal and England during the eighteenth century is not known, partly because of the difficulty in estimating contraband, and research continues to be undertaken to determine both the quantitative and qualitative ramifications of gold movements (Costa, Rocha and Sousa, 2010; Sousa, 2000). Historians have long agreed on the imbalance that Brazilian gold helped to create or deepen in the Anglo-Portuguese relationship. It provided a boost to the English economy, by contributing to the monetization of England, or even, in the long run, to its Industrial Revolution, as pointed out by Adam Smith. Brazilian gold also proved to be an impediment to Portuguese economic development. John Richards, for instance, pointed out that the Brazilian gold rush contributed to the migration of 400,000 Portuguese to Brazil, one fifth of the total population of 1700, which was detrimental to Portuguese industry (Richards, 2003: 405). The growing dependence on English imports also prevented the development of a proper Portuguese industry (Sideri, 1970: 40-83), and no nation liked to see its bullion leave, whether it was gold or silver. In England, for instance, it was forbidden to export domestic bullion.6
English merchants became heavily involved in smuggling gold and bullion out of Portugal. Magalhães Godinho wrote that “the vessels of the King of England generally arrived in Lisbon at the same time as the fleets from Brazil, of course completely by chance. […] During the night, a good part of the undeclared gold was unloaded onto the English vessels” (Godinho, 2005: 330). This practice led Malachy Postlethwayt to remark that the discontent of the Portuguese government with regard to clandestine bullion exports should not be exclusively blamed upon England. In his opinion, other countries benefited as much from Brazilian gold, because the English were transporters, taking the gold to different nations (Postlethwayt, 1774; Pedreira, 2003 and 1994). The Portuguese were well aware of the central role played by English ships in gold smuggling, which led to various clashes on the waterfront between Portuguese soldiers and English seafarers. King João V was even able to witness these quarrels from his palace window. English diplomats complained of Royal Navy captains accepting contraband gold and diamonds on board their vessels. Furthermore, the frequent announcements in English newspapers of the arrivals of Brazilian gold did not go unnoticed, and such indiscretion irritated both sides (Boxer, 1969: 464-469).
Bullion was not the only commodity that found its way into England illegally. Diamonds had been discovered in Brazil in the early eighteenth century. Their extraction was initially free, and older European trade networks distributing Asian diamonds quickly came to incorporate Brazilian stones (Vanneste, 2011: 50-57; Yogev, 1978, Chapter 7). Diamonds were regularly smuggled in order to avoid the payment of duties. Diplomatic correspondence between English representatives in Lisbon and London shows that English merchants were heavily involved in the diamond contraband trade.7 English merchants active in the Indian diamond trade shipped out jewellery, silver, coral, and polished diamonds, in order to obtain rough diamonds in return (Yogev, 1978). Considering the problematic nature of the remittances of Brazilian gold to London, uncut diamonds proved to be a good alternative, particularly considering their small size and the fact that the trade with Europe was not regulated until the 1750s.8 This fact was acknowledged by the English consul in Lisbon in 1732, when he wrote to the Duke of Newcastle, Secretary of State, saying that “as there is no Law in being against the importing of diamonds, & as they are more easily Concealed than Gold, I believe a great many have escaped the Kings Hands, & indeed less severity has been used for the discovery of them … the Penalty of running Gold is very great, no less than Confiscation, with Banishment or the Gallies.”9
One of the merchants who found himself caught up in diamond contraband was John Bristow. In 1735, he wrote to the Duke of Newcastle, saying that precious stones had been seized on board a ship moored in the Lisbon harbor, and that a lawsuit was pending. Bristow urged the Duke to instruct Lord Tyrawly, the special English envoy to the Portuguese capital, to provide assistance.10 Seventeen years later, in early 1752, the Portuguese customs suspected that Bristow was trying to export bullion on board an English ship, the Lyme. On January 29, a customs officer came to Bristow’s house to inform him of the pending search of the ship for bullion. On board the ship, a confrontation with the sailors broke out, and guns were drawn.11
English merchants, including John Bristow, complained that the law against bullion exports was old and obsolete and that its enforcement would greatly harm English trade.12 They also expressed their grievances over the aggressive attack on Bristow’s house, in full sight of many people, based on what they claimed was false information. Lord Tyrawly was called upon to resolve the friction that had arisen, but he soon wrote to England that the merchants’ complaints were not substantial enough and that Bristow’s claims had no foundation.13 Furthermore, the merchant committee led by Bristow lost its support among the other members of the English Factory, who labeled them as “silly Ignorant wrongheaded men.”14 Bristow’s actions were seen as “intrigues.”15
Encounters in London: Money & Politics
For John Bristow, however, it did not matter much that he had fallen out of favor with his colleagues. By the time it happened, he could rely on a powerful ally, the Marquis of Pombal. Named Secretary of State for Foreign Affairs and War in July, 1750, Sebastião de Carvalho e Melo quickly set about reforming Portugal’s economic and imperial policy. His desire was to “fortify the nation’s bargaining power within the Atlantic commercial system” (Maxwell, 1995: 67), and to that purpose he used “mercantilist techniques” to create a national bourgeoisie of Portuguese merchants who would eventually have the financial and commercial power to challenge foreign traders (Madureira, 1997b; França, 1983; Carreira 1983).
Pombal’s plans to improve Portugal’s economy through the strengthening of a national commercial class did not, however, exclude the participation of foreign merchants and financiers, as long as they served under the Portuguese political order. He was sufficiently pragmatic in his approach not to alienate the English interest in Portugal, and a Portuguese official wrote to Bristow about the smuggling case, saying that the English merchants did not need to fear bad consequences, as he was assured that the Secretary of State was well aware of the commercial interests at stake.16 Three months later, the merchants involved wrote to Lord Tyrawly that Pombal had personally intervened, asserting that the consequences would be dangerous if Portuguese authorities tried to stop the bullion exports to England.17
Pombal’s pragmatic attitude is not so difficult to explain in the light of early modern Anglo-Portuguese relations, but in most historiographic contributions on Pombal’s economic reform, his development of a direct relationship with actual English merchants as a tool in his policy has remained virtually ignored. In order to comprehend Pombal’s efforts to create a strong Portuguese mercantile class in order to assert Portugal’s place in the greater economic system to its own advantage, it is important to understand what steps he took, and which people he decided to rely upon. John Bristow was one of these people, and, considering his background in smuggling, an unlikely but perhaps suitable candidate.
Pombal and Bristow had known each other from the time when they both lived in London, the former as a diplomatic envoy, and the latter as a merchant actively involved in trade with Portugal. When Pombal arrived in the English capital in 1738, the residence of the Portuguese ambassador was in a poor condition. In 1740, he requested permission to move to a more expensive residence, which also had a chapel. The purchase of that house and the necessary repairs would cost 9,000 cruzados. In November, 1740, the decision was made to rent the house, located in Audrey Street. The person who paid the deposit to the landlord was Francis Salvador.18 Salvador was one of the most successful diamond merchants of his day, with a large stake in the trade with India.19 He came from a Sephardic Jewish family that had left Portugal for Amsterdam and London in the seventeenth century, but he had maintained a connection with the Portuguese government. When the discovery of Brazilian diamonds led to a fall in prices on the European markets, Francis Salvador was asked for advice by Portuguese officials. His suggestion to close down the Brazilian mines entirely was taken up by the Crown between 1735 and 1739. In retrospect, Pombal thought this advice came from Salvador’s interests in the Anglo-Indian diamond trade, and he remarked in a later manuscript dealing with a history of the diamond trade that it “brought the famous Hebrew great joy.” A few lines later, he labeled Salvador’s suggestions as “sinister counsel.”20
At the time, however, Portuguese officials seemed content with Salvador’s services, and even sought to use them a second time. In 1738, the Portuguese had to deal with the Maratha invasion of the island of Salsete, off the coast of Bombay. It was the year in which Pombal had taken office in London, and through the intermediation of Salvador, he started talking to the directors of the East India Company, at Salvador’s house.21 The failure of a plan to send English ships to assist the Portuguese settlement on Salsete led Pombal to consider the idea that Portugal should take care of itself, and that a Companhia Geral do Oriente should be founded. To that end, Francis Salvador was instructed to purchase ships on behalf of the Portuguese government.22 In February, 1740, news began to circulate that Francis Salvador had bought the warship Cumberland, which was to sail to Macau.23
Things turned sour when Salvador reclaimed the loans that he had made for obtaining and repairing the Portuguese envoy’s residence. Two repayments were made to him in 1745, and although work on the chapel ended in 1746, Portuguese officials did not make any more efforts to repay Salvador.24 Instead, they complained about the pressure put upon them by the Jewish diamond trader.25 In November, Francis Salvador wrote directly to Pombal to resolve the issue in a friendly manner.26 A month later, nothing had changed: “Francis Salvador persecutes me strongly about delivering the new chapel [for use], for which he has the keys, but I answer him that I will not do so without the orders of Your Excellency.”27 It seems that the lack of haste on the Portuguese side had to do with a shortage of financial means, which had made them turn to Salvador in the first place. In order to obtain the necessary sum, the merchant John Bristow was consulted. After some talks, Bristow was prepared to lend money to the Portuguese government, and, in February, 1747, Francis Salvador was finally reimbursed the sum of £2,417.28
The Diamond Monopoly: Families of Trade, Foreigners of Influence
It seems that Pombal had not forgotten Bristow’s financial help when he came to the latter’s rescue five years later. More than just simply pressuring Portuguese customs into letting the Bristow affair go, Pombal decided to incorporate John Bristow into one of his plans to reshape the Luso-Brazilian commercial system. When the Brazilian diamond mines re-opened in 1739, the business of extraction became a monopoly (Ferreira, 2009). Between 1749 and 1752, it was in the hands of Brazilian-born Felisberto Caldeira Brant. In 1751, however, Caldeira Brant fell from grace as it became known that he was heavily involved in smuggling and corruption. At the same time, Pombal was negotiating at the Lisbon court with the previous contract-holder, Portuguese-born João Fernandes de Oliveira. The latter was awarded the new contract, but Brant’s clandestine activities had led to a failure of credit links that stretched all the way to Lisbon. Pombal decided to change the manner in which Brazilian diamonds were traded in Europe, a practice that had hitherto been free. In a manuscript concerned with the history of the Brazilian diamond trade, he wrote that he gave himself eight days away from the royal court to come up with a plan, a period “in which I could be locked in my office, without being interrupted.”29
Pombal wanted to turn Brazilian diamond trade in Europe into a separate commercial monopoly, and one of his motivations was to construct a Christian trade network in opposition to the Jewish merchants who dominated the diamond trade.30 In his mind, the company that would receive this privilege had to be Dutch or English, in view of their expertise, credit, and capital. The name John Bristow came to the fore. He had already proposed to offer the Portuguese government the 700,000 cruzados they needed to reimburse the protested bills of exchange that had been used by Felisberto Caldeira Brant to obtain credit. In February 1753, almost one year after his involvement in the smuggling of bullion to England, the king agreed to Bristow’s proposal and granted him the monopoly to sell Brazilian diamonds for a period of six years.31 The contract was signed on the tenth of August 1753, and its contents were to remain secret.32 Bristow wasn’t the sole party in the enterprise, as he had established a partnership with the Dutch merchant Herman Joseph Braamcamp, a former representative for Prussia at the Portuguese court, who had tried in vain to obtain the mining monopoly a few years earlier.33 Four days later, news was already spreading in Lisbon. A Huguenot firm wrote to its correspondent in Antwerp, stating that “the House of Bristows Warde & C° it is said is contracting some say for the Brazeel diamonds that are actually in the King’s coffers in the mint of the old contract and others say that tis for thoze that are to come of the new.”34
It seems that Pombal’s pragmatism in relation to the English had catapulted a smuggler at odds with his fellow merchants into a very powerful position. But Bristow’s luck would not last for long. News reached Pombal that Bristow’s company had associated itself with the Jewish trader Francis Salvador.35 Pombal disliked the Jews, and held a personal grudge against Salvador. To make matters worse, the earthquake that struck Lisbon in 1755 led directly to the bankruptcy of Bristow, Warde & C°, and Bristow could no longer fulfill his contractual obligations, leading to a termination of the contract in 1756. Portuguese officials were instructed to look for merchants in London who could take over the contract, which would remain in foreign hands until the end of the century.
Together with a sustained Anglo-Dutch presence in the diamond trade, Pombal also incorporated Portuguese merchants into the administration of the diamond contracts, a strategy that was very much in line with his attempts to create a national commercial bourgeoisie to occupy key positions in Portuguese industries and trade monopolies. José Francisco da Cruz and José Rodrigues Bandeira were appointed as diamond administrators in Lisbon,36 positions that their families continued to hold until the end of the eighteenth century (Rodrigues, 1982: 19). Pedro Quintela became the monopoly holder in 1790, purchasing 158,168 carats of diamonds between 1791 and 1800 (Rabello, 1997: 177). Members of these three families represented 29 per cent of the financial value of investments in Brazilian diamonds to be paid back in 1770.37 These families were part of the grupo dos tabaqueiros, people involved in the tobacco contract, including also the Ferreira, Fernandes Bandeira, Machado, Braamcamp and other families (Pedreira, 1996: 361; Costa, 1992; França, 1983). They were also involved in the junta do comércio, the Pernambuco and Grão Pará & Maranhão trading companies, and several Lisbon factories (Madureira, 1997a; Maxwell, 1995: 74-75; Nunes Dias, 1968: 37; Macedo, 1951: 141-143).38
Brazilian diamonds were an important commercial connection between this Portuguese commercial oligarchy and a number of foreign traders, but these groups also became linked socially, through marriage. Anselmo José da Cruz, the brother of the diamond administrator José Francisco, had a daughter who married Geraldo Wenceslão Braamcamp, the son of Herman Braamcamp, who had been involved in the first diamond trade monopoly together with John Bristow, and was a director in the Pernambuco Company (Maxwell, 1995: 74). Geraldo later became the first Baron of Sobral (Affonso and Valdez, 1988: 350). Daniel Gildemeester, the holder of the diamond monopoly between 1761 and 1783, had a son who married Maria Teresa Machado. Earlier, Daniel had opposed the marriage of one of his sons in a conversation with a Quintela (Bombelles, 1979: 129-130).
This confirms that foreign merchants could rise to important commercial positions within the Portuguese economy, but not without becoming linked to a cosmopolitan Portuguese bourgeoisie. A simple dichotomy in which Pombal tried to push Portuguese merchants into influential commercial positions at the expense of foreign traders cannot be maintained as an explanation. While Pombal’s policies need to be inscribed in a wider historical context in which Portugal’s commercial dependency on England was questioned, it is also important to acknowledge Pombal’s pragmatism and his readiness to include contacts from his English days in the circles of his political and economic accomplices.
Conclusion
Historians studying the development of the Portuguese economy in the eighteenth century have rightly stressed the importance of the historical context of Anglo-Portuguese relations and the cataclysm of 1755, which led to a Portuguese diplomat becoming the most prominent figure in the higher echelons of government. While there is no doubting the importance of analyzing these events within a wider international economic context, a micro-historical approach that looks at relationships between people rather than at relationships between economic and political structures can be equally revealing.
The Marquis of Pombal’s efforts to build a Portuguese merchant class ready to compete with foreign traders in terms of finance, commercial power and industrial development were a logical consequence of re-thinking the century-old relationship between England and Portugal within a “mercantilist” framework. It should, however, be remembered that Pombal was pragmatic enough to understand that the expansion of the Portuguese economy on these terms could never take place without preserving the Anglo-Portuguese relationship, and without benefitting from the personal experience and capital of a number of well-to-do foreign merchants. It remains remarkable, however, that his decision to rely upon English commercial expertise brought Pombal to John Bristow, a known contrabandist. This cannot be satisfactorily explained within the traditional narrative of Anglo-Portuguese relations. Looking at personal relationships hints at an interdependency of foreign and Portuguese merchants that was economic, political and socio-cultural. Much of this came to fruition because of Pombal’s English past, and the connections he had maintained from that period.
References
Affonso, Domingos Araujo and Valdez, Ruy Dique Travassos (1988). Livro de Oiro da Nobreza. Lisbon: Telles da Sylva: Vol. 3.
Antonil, André João (1967) [1711]. Cultura e Opulência do Brasil, A.P. Canbrava (ed.). São Paulo: Editora Nacional.
Azevedo, João Lúcio (1922). O marquês de Pombal e sua época. Rio de Janeiro: Annuario do Brasil.
Bombelles, Marquis de (1979). Journal d’un ambassadeur de France au Portugal (1786-1788), R. Kann (ed.). Paris: Presses Universitaires.
Boxer, Charles Ralph (1969). “Gold and British Traders in the First Half of the Eighteenth Century”. The Hispanic American Historical Review, 49 (3): 454-472.
Boxer, Charles Ralph (1962). The Golden Age of Brazil – Growing Pains of a Colonial Society 1695-1750. Berkeley; Los Angeles; London: University of California Press.
Brandão, Fernando de Castro (2002). História Diplomática de Portugal – uma cronologia. Lisbon: Livros Horizonte.
Cardoso, José Luís (2013). “Jorge Borges de Macedo: Problems of the History of Portuguese Economic and Political Thought in the Eighteenth Century”. E-journal of Portuguese History, 11 (2): 93-100.
Cardoso, José Luís (2005). “Política Económica”. In Pedro Lains and Álvaro F. Silva (eds.), História Económica de Portugal 1700-2000. Lisbon: Imprensa de Ciências Sociais, Vol. 1, 345-367.
Cardoso, José Luís et al. (eds.)(2003). O Tratado de Methuen, 1703: diplomacia, guerra, política e economia. Lisbon: Livros Horizonte.
Carreira, António (1983). As Companhias Pombalinas. Lisbon: Ed. Presença.
Costa, Leonor Freire, Rocha, Maria Manuela and Sousa, Rita Martins de (2010). Brazilian Gold in the eighteenth century: a reassessment (Working Paper N°42). Lisbon: GHES.
Costa, Fernando Dores (1992). “Capitalistas e serviços: empréstimos, contratos e mercês no final do século XVIII”. Análise Social, 28 (116-117): 441-460.
Dickson, Peter George Muir (1993). The Financial Revolution in England – A Study in the Development of Public Credit 1688-1756. Aldershot: Gregg Revivals.
Ferreira, Rodrigo de Almeida (2009). O descaminho dos diamantes; relações de poder e sociabilidade na demarcação diamantina no período dos contratos (1740-1771). Belo Horizonte: FUMARC/Letra & Voz.
Fisher, H.E.S. (1981). “Lisbon, its English merchant community and the Mediterranean in the eighteenth century”. In Philip L. Cottrell and Derek H. Aldcroft (eds.), Shipping, Trade and Commerce – Essays in memory of Ralph Davis. Leicester: Leicester University Press, 23-44.
Fisher, H.E.S. (1971). The Portugal Trade – A Study of Anglo-Portuguese Commerce 1700-1770. London: Methuen & Co Ltd.
Fisher, H.E.S. (1963). “Anglo-Portuguese Trade, 1700-1770”. Economic History Review, New Series, 16 (2): 219-233.
França, José Augusto (1983). “Burguesia pombalina, nobreza mariana, fidalguia liberal”. In M.H.C. dos Santos and L. de Albuquerque (eds.), Pombal Revisitado: Comunicações ao Colóquio Internacional. Lisbon: Ed. Estampa, Vol. 1, 17-33.
Francis, David (1985). Portugal 1715-1808 – Joanine, Pombaline and Rococo Portugal as seen by British Diplomats and Traders.London: Tamesis Books Ltd.
Godinho, Vitorino Magalhães (2005). “Portugal and the Making of the Atlantic World: Sugar Fleets and Gold Fleets, the Seventeenth to the Eighteenth Centuries”. Review (Fernand Braudel Center), 28 (4): 313-337.
Godinho, Vitorino Magalhães (1953). “Portugal, As Frotas do Açúcar e as Frotas do Ouro (1670-1770)”. Revista da História, 7 (15): 69-88.
Higgins, Kathleen J. (1999). “Licentious Liberty” in a Brazilian Gold-Mining Region – Slavery, Gender, and Social Control in Eighteenth-Century Sabará, Minas Gerais. University Park: The Pennsylvania State University Press.
Hespanha, António Manuel (2007). “A note on two recent books on the patterns of politics in the 18th century”. E-journal of Portuguese History, 5 (2): 1-9.
Lisboa, M.E. (2009). O Solar do Morgado de Alagoa: os Irmãos Cruz e os Significados de um Património Construído (Segunda Metade do Século XVIII). Lisbon: Edições Colibri.
Lodge, Richard (1933). “The English Factory at Lisbon: Some Chapters in Its History”. Transactions of the Royal Historical Society, Fourth Series, 16: 211-247.
Macedo, Jorge Borges de (1966). “Mercantilismo”. In Joel Serrão (ed.), Dicionário de História de Portugal. Lisbon: Iniciativas Editoriais, Vol. 3, 35-39.
Macedo, Jorge Borges de (1963). Problemas de História da Indústria Portuguesa no Século XVIII. Lisbon: Associação Industrial Portuguesa.
Macedo, Jorge Borges de (1951). A Situação Económica no Tempo de Pombal. Alguns Aspectos. Lisbon: Portugália Editora.
Madureira, Nuno Luís (1997a). “A “sociedade civil” do Estado. Instituições e grupos de interesses em Portugal (1750-1847)”. Análise Social. 32 (142): 603-624.
Madureira, Nuno Luís (1997b). Mercado e Privilégios. A Indústria Portuguesa entre 1750 e 1834. Lisbon: Ed. Estampa.
Maxwell, Kenneth R. (1995). Pombal – Paradox of the Enlightenment. Cambridge: Cambridge University Press.
Maxwell, Kenneth R. (1968). “Pombal and the Nationalization of the Luso-Brazilian Economy”. The Hispanic American Historical Review, 48 (4): 608-631.
Monteiro, Nuno Gonçalo (2006). D. José: na sombra de Pombal. Lisbon: Círculo dos Leitores.
Müller, Titus (2010). Die Jesuitin von Lissabon. Berlin: Rütten & Loening.
Nunes Dias, Manuel (1968). “Os acionistas e o capital social da Companhia do Grão Pará e Maranhão (Os dois momentos: o da fundação (1755-1758) e o da véspera da extinção (1776)”. Cahiers du monde hispanique et luso-brésilien, 11: 29-52.
Pedreira, Jorge M. (2003). “Diplomacia, manufacturas e desenvolvimento económico. Em torno do mito de Methuen”. In J.L. Cardoso et al. (eds.) O Tratado de Methuen, 1703: diplomacia, guerra, política e economia. Lisbon: Livros Horizonte.
Pedreira, Jorge M. (1996). “Tratos e contratos: actividades, interesses e orientações dos investimentos dos negociantes da praça de Lisboa (1755-1822)”. Análise Social, 31 (136-137): 355-379.
Pedreira, Jorge M. (1994). Estrutura industrial e mercado colonial. Portugal e Brasil (1780-1830). Lisbon: DIFEL.
Pereira, Alvaro S. (2009). “The Opportunity of a Disaster: the Economic Impact of the 1755 Lisbon Earthquake”. The Journal of Economic History, 69 (2): 466-499.
Pinto, Virgílio Noya (1979). O ouro brasileiro e o comércio anglo-português. São Paulo: Editora Nacional.
Postlethwayt, Malachy (1774). The Universal Dictionary of Trade and Commerce. Vol. 2. London.
Richards, John F. (2003). The Unending Frontier: An Environmental History of the Early Modern World. Berkeley: University of California Press.
Rodrigues, Eduardo Gonçalves (1982). “Pombal e a Questão dos Diamantes”. Brotéria, 115 (2-4): 209-238.
Shaw, L.M.E. (1998). The Anglo-Portuguese Alliance and the English Merchants in Portugal, 1654-1810. Aldershot: Ashgate.
Sideri, Sandro (1970). Trade and Power: Informal Colonialism in Anglo-Portuguese Relations. Rotterdam: Rotterdam University Press.
Sousa, Rita Martins de (2006). Moeda e Metais Preciosos no Portugal Setecentista (1688–1797). Lisbon: INCM.
Sousa, Rita Martins de (2000). O Brasil e as emissões monetárias de ouro em Portugal (1700-1797). Penélope, 23: 89-107.
Subtil, José (2007). O terramoto politico (1755-1759). Memória e poder. Lisbon: UAL.
Sutherland, Lucy (1984a). “The Accounts of an Eighteenth-Century Merchant: the Portuguese Ventures of William Braund”. In Aubrey Newman (ed.), Politics and Finance in the eighteenth century – Lucy Sutherland. London: The Hambledon Press, 365-385.
Sutherland, Lucy (1984b). “Samson Gideon: eighteenth century Jewish financier”. In Aubrey Newman (ed.), Politics and Finance in the eighteenth century – Lucy Sutherland. London: The Hambledon Press, 386-398.
Vanneste, Tijl (2011). Global Trade and Commercial Networks: Eighteenth-Century Diamond Merchants. London: Pickering & Chatto.
Walford, A.R. (1940). The British Factory in Lisbon & its closing stages ensuing upon the treaty of 1810. Lisbon: Instituto Britânico em Portugal.
Woolf, Maurice (1962-1967). “Joseph Salvador 1716-1786”. Transactions and Miscellanies of The Jewish Historical Society of England, 21: 104-137.
Yogev, Gedalia (1978). Diamonds and Coral. Anglo-Dutch Jews and Eighteenth Century Trade. Leicester: Leicester University Press.
Notes
1 Université Paris 1 Panthéon-Sorbonne, France. E-mail: tijl.vanneste@eui.eu. I wish to thank the anonymous referees for their suggestions.
2 The English cemetery, established in 1717, is located in Rua São Jorge in the southwest area of Lisbon.
3 See
-john-1701-68 (consulted on 16/09/2015).
4 Entry on 21/03/1749, Court Minute Book 63 E.I.C., British Library/India Office Records (BL/IOR), B/70, ff. 631-633. For Gideon, see Sutherland (1984b). Francis Salvador and his relationship with Bristow and the Portuguese government are discussed below.
5 Shaw (1998: 89). He figures as a character in a novel set in the earthquake-stricken Lisbon of 1755 (Müller, 2010).
6 The ban on the export of domestic specie is also noticeable in the shipments of silver made by diamond merchants to India. All the silver was foreign. Court Minute Books 60-68 E.I.C. (1742-1760), BL/IOR, B/67-B/75. For the profitability of bullion trade with Portugal, see Sutherland (1984a).
7 For a description of a number of cases and the role played therein by the special envoy Lord Tyrawly, see Francis (1985: 55-62).
8 However, the mining had been organized as a monopoly since 1739 (Vanneste, 2011: 51).
9 Charles Compton to the Duke of Newcastle, 12/01/1732, Lisbon, National Archives, State Papers (NA/SP) 89/37, ff. 141-145, on f. 142r.
10 Duke of Newcastle to Lord Tyrawly, Whitehall, 28/10/1735, NA/SP 89/36, ff. 190-192.
11 A narrative of the reasons, which constrained the underwritten Bristow’s, Warde & Company to the Committee of the Factory (no minister or consul from His Majesty being then resident here) for their assistance and support in so critical a situation, Lisbon, 22/06/1752, BL, Add. 23634 (Tyrawly Papers; Correspondence of Lord Tyrawly when ambassador in Portugal, 1752-1757), ff. 86-87.
12 Committee of the English factory to Lord Tyrawly, Lisbon, 22/06/1752, BL, Add. 23634, ff. 104-105.
13 Lord Tyrawly to Lord Holderness, Lisbon, 25/06/1752, BL, Add. 23634, ff. 106-111.
14 Johan Mayne, Isaac Mollortie, Tom Chace, Phillip Jackson, S.l., S.d., BL, Add. 23634, f. 159.
15 Ibid.
16 Antonio Luiz de Oliveira to John Bristow, Lisbon, 18/03/1752, BL, Add. 23634, f. 38.
17 A narrative, Lisbon, 22/06/1752, BL, Add. 23634, ff. 86-87.
18 Antonio Guedes Pereira and Marco Antonio de Azevedo Coutinho to Sebastião José de Carvalho e Melo, Lisbon, 20/02/1742, BL, Add. 20800 (Cartas diplomaticas de Lisboa para Londres; 1738-1745), ff. 342-43.
19 For more on the Salvador family, see Woolf (1962-1967) and Vanneste (2011: 124-139 and 150-162).
20 “cauzou ao dito Famozo Hebreo huma grande alegria” and “os seus sinistros conselhos,” Deducçaó Compendiosa dos Contractos de Mineraçaó dos diamantes; dos outros contractos da Extracçaó delles; dos cofres de Lisboa para os Payzes Estrangeiros; dos perigos em que todos laboravam e das Providencias, comque a elles occorreo o senhor Rey Dom Jozeph para os conservar, S.d., Biblioteca Nacional Lisbon, Colecção Pombalina (BNL/CP), Códice 695, ff. 306-80, on f. 311r.
21 Sebastião José de Carvalho e Melo, London, 02/12/1738. BL, Add. 20798 (Cartas diplomaticas de Londres para Lisboa 1738-1739. Cartaz de oficio ao Secretario do Estado e scriptaes por Sebastiaó Joze de Carvalho e Mello desde a cidade de Londres no anno de 1738), ff. 26-28. See also the letters from 21/11/1738 and 15/12/1738.
22 “Com Francisco Salvador continuey as conversaçoes depois de o conhecer intereçado na Companhia,” Sebastião José de Carvalho e Melo, London, 20/01/1739, BL, Add. 20798, ff. 85-87.
23 Antonio Guedes Pereira and Marco Antonio de Azevedo Coutinho to Sebastião José de Carvalho e Melo, Lisbon, 19/02/1740, BL, Add. 20801 (Cartas para Londres 1738-1742), ff. 5-7.
24 Francisco Caetano to Sebastião José de Carvalho e Melo, London, 30/12/1745, BL, Add. 20797 (Cartas de Londres 1745-1747), ff. 83-85.
25 Francisco Caetano to Sebastião José de Carvalho e Melo, London, 12/08/1746, BL, Add. 20797, ff. 178-179. See also the letters of 16/08/1746 on ff. 180-181, 23/08/1746 on ff. 192-193 and 16/09/1746 on f. 197.
26 Francis Salvador to Sebastião José de Carvalho e Melo, London, 04/11/1746, BL, Add. 20797, f. 218.
27 “Francisco Salvador me persegue fortemente para que tome entregue da Capella Nova, de que elle tem as chaves, porem eu lhe respondo que o naó faço sem ordem de V.E.,” Francisco Caetano to Sebastião José de Carvalho e Melo, London, 09/12/1746, BL, Add. 20797, ff. 229-30.
28 Francis Salvador to Sebastião José de Carvalho e Melo, London, 16/02/1747, BL, Add. 20797, ff. 257-58.
29 Deducçaó, f. 324r.
30 Deducçaó, f. 326v.
31 Deducçaó, f. 330v.
32 Alvará de 11/08/1753, Arquivo Nacional da Torre do Tombo, Lisbon, Collecção de Leis, Maço 4, N°144.
33 Deducçaó, ff. 324r-326v. See also Brandão (2002: 144).
34 Berthon & Garnault to James Dormer, Lisbon, 14/08/1753, Felixarchief Antwerp, IB1652. They confirmed Bristow’s involvement in the contract as well as the participation of Herman Braamcamp’s brother Gerard in Amsterdam on 09/10/1753.
35 Deducçaó, f. 337v.
36 Deducçaó, f. 346r.
37 Letras sobre o contrato dos diamantes que há para pagar, e dias dos seus vencimentos (1770). BNL/PB, Códice 691, f. 18.
38 On Anselmo José da Cruz and his two brothers, see Lisboa (2009).
Received for publication:15 September 2014
Accepted in revised form: 6 November 2015
Recebido para publicação: 15 de Setembro de 2014
Aceite após revisão: 6 de Novembro de 2015
2015, ISSN 1645-6432
e-JPH, Vol. 13, number 2, December 2015
The logging API provides multiple levels of reporting and the ability to change to a different level during program execution. Thus, you can dynamically set the logging level to any of the following states:
| Level | Effect | Numeric Value |
| --- | --- | --- |
| OFF | No logging messages are reported. | Integer.MAX_VALUE |
| SEVERE | Only logging messages with the level SEVERE are reported. | 1000 |
| WARNING | Logging messages with levels of WARNING and SEVERE are reported. | 900 |
| INFO | Logging messages with levels of INFO and above are reported. | 800 |
| CONFIG | Logging messages with levels of CONFIG and above are reported. | 700 |
| FINE | Logging messages with levels of FINE and above are reported. | 500 |
| FINER | Logging messages with levels of FINER and above are reported. | 400 |
| FINEST | Logging messages with levels of FINEST and above are reported. | 300 |
| ALL | All logging messages are reported. | Integer.MIN_VALUE |
You can see the effect of the different levels by changing a logger's level at run time: only messages logged at or above the current level are reported.
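As a minimal sketch of this filtering (the logger name and the specific levels tested here are illustrative, not from the original listing):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LevelDemo {
    public static void main(String[] args) {
        // "demo" is an arbitrary logger name chosen for this sketch.
        Logger log = Logger.getLogger("demo");

        // With the level set to WARNING, only WARNING and SEVERE pass:
        log.setLevel(Level.WARNING);
        System.out.println(log.isLoggable(Level.SEVERE)); // true
        System.out.println(log.isLoggable(Level.INFO));   // false

        // With the level set to ALL, everything passes, even FINEST:
        log.setLevel(Level.ALL);
        System.out.println(log.isLoggable(Level.FINEST)); // true

        // OFF suppresses everything, including SEVERE:
        log.setLevel(Level.OFF);
        System.out.println(log.isLoggable(Level.SEVERE)); // false
    }
}
```

isLoggable applies the same level comparison that the logging methods use internally, so it shows which calls would actually be reported without producing any handler output.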
You can have multiple logger objects in your program, and these loggers are organized into a hierarchical tree, which can be programmatically associated with the package namespace. Child loggers keep track of their immediate parent and by default pass the logging records up to the parent.
The root logger object is always created by default, and is the base of the tree of logger objects. You get a reference to the root logger by calling the static method Logger.getLogger(""). Notice that it takes an empty string rather than no arguments.
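A short sketch of the tree and of level inheritance (the logger names below are made up for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class HierarchyDemo {
    public static void main(String[] args) {
        // The root logger: note the empty string, not a missing argument.
        Logger root = Logger.getLogger("");

        // Dot-separated names mirror the package namespace; creating
        // "com.example" first makes it the nearest extant parent of
        // "com.example.io".
        Logger parent = Logger.getLogger("com.example");
        Logger child = Logger.getLogger("com.example.io");

        System.out.println(child.getParent() == parent); // true
        // No logger named "com" was ever created, so the root logger is
        // the nearest extant ancestor of "com.example":
        System.out.println(parent.getParent() == root);  // true

        // A child with no level of its own inherits its effective level
        // from the nearest parent that has one:
        parent.setLevel(Level.FINE);
        System.out.println(child.getLevel());             // null
        System.out.println(child.isLoggable(Level.FINE)); // true
    }
}
```

This inheritance is why setting a level on one logger high up in the tree changes the behaviour of every descendant that has not set its own.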
I have made some progress on my iPhone game.
Wooooaaah, there, careful, I did tell you to sit down. Not much progress, granted, but thanks to Baby Cobra sleep, a late train, being deserted by my train friends a couple of times (fine, they all had excuses, but I choose to feel deserted for dramatic effect) and an evening or two’s button pressing I have managed to fit in a staggering eight hours of progress. Yupperillo, that’s a day’s worth of achievement. The classic cobra-artist’s rendition of the Celebreight, to the right, is there to bang in just how magnificent this occasion is. It is, in fact, to a reasonable approximation, infinity times more progress than was made between May 2011 and the start of June 2012 so you can understand my excitement. Let’s run the numbers through the Progressatronic-6000 Deluxe Schedulometer and see what pops out:
I is lerrning fings
The progress has mainly been with the map editor. This is now pretty much finished. I had to make a lot of adjustments and I rewrote large chunks of it. It’s now slim, fast, reliable and is almost certainly in better health than I am. I learnt a lot, including:
- OpenGL for the map editor was a poor choice. I used it because it got things going quickly: I had an OpenGL rendering engine for the game itself and it compiled and worked under OSX pretty much first time. I simply changed a few “UIxxx” classes to “NSxxx” and Bob’s my uncle. However, given I’m rendering some lines, circles, text and blocks in a non-performance sensitive 2D environment, OpenGL is massively over-complicated. It’s like ordering a fleet of taxis so that each of the items in my bag can be carried to the station separately. More on this in a bit.
- C++ 11 is g-g-g-grrrrreat, but needs iOS 5 as a target. It’s amazing how quickly you get used to the little time-savers that C++ 11 offers you. I love the range based for-loops. I adore the little touches like auto (type inference) and scoped enums.
std::initializer_list makes me warm inside. The ability to delete the automatically generated things like copy constructors and what-nots from your classes to better help the compiler to help you not fuck things up by accident is sex on wheels. These features have seeped into my standard day-to-day programming and I feel naked (but not greased up) without them. Learning that I need to target my iPhone game at iOS 5 to use these features was a minor disappointment but won't really have any impact given we'll be on iOS 42.6 by the time I'm done.
- If you need 0,0 at the top left, it’s easy. You need to do two things. One is add the one line of code I added in last month’s update and the other is to flip your NSImages that contain the stuff you’re rendering onto the NSView otherwise they render upside down. You do that like this:
NSImage* blocksImage = [[NSImage alloc] initWithContentsOfFile:@"My_Awesome_Image"];
[blocksImage setFlipped:YES];
This works in concert with NSView's isFlipped { return YES; } jazzamatazz.
- Sprite-sheets using NSImage are easy-peasy! Despite an uncharacteristically unhelpful answer on Stack Overflow1, it is easy to render sprites from an NSImage sprite sheet. Quite why anyone would even suggest cutting out the individual images perplexes me, but perhaps that individual enjoys making huge amounts of unnecessary work for themselves. Here’s some code that does it:
// Target rect is the NSView screen position. Shrink 'targetBlockSize' to
// do neat scaling:
NSRect targetRect;
targetRect.origin.x = x;
targetRect.origin.y = y;
targetRect.size.width = targetBlockSize;
targetRect.size.height = targetBlockSize;
//
// Source rect is on the sprite sheet. This assumes that the sheet is 8 x 8 of
// 64 x 64 sprites. "blockIndex" is a 0 -> X index of which sprite we want.
NSRect sourceRect;
sourceRect.origin.x = (float)((blockIndex % 8) * 64);
sourceRect.origin.y = (float)((blockIndex / 8) * 64);
sourceRect.size.width = 64.0f;
sourceRect.size.height = 64.0f;
//
// Draw block and selection rectangle if required. *This* is the magic line:
[blocksImage drawInRect:targetRect
               fromRect:sourceRect
              operation:NSCompositeCopy
               fraction:1.0];

if (blockIndex == (unsigned int)selectedBlockIndex)
{
    // This just shows a yellow un-filled rectangle around the block that's
    // the selected one:
    [[NSColor yellowColor] setStroke];
    targetRect.origin.x += 1.0f;
    targetRect.origin.y += 1.0f;
    targetRect.size.width -= 2.0f;
    targetRect.size.height -= 2.0f;
    [NSBezierPath strokeRect:targetRect];
} // if (this was the selected block)
The magic there was done with NSImage’s
drawInRect. You can also use this to scale the target – I use it in the editor to allow zoom and “small mode” for the map block choice. In a nutshell, it lets you take a rectangle of your choice out of your source NSImage and blit it to a target rectangle of your choice in your NSView. Nice, eh? Well, I thought it was.
- Fat fingers are fat. Yeah, and that means yours too. When you press on the simulator, you do so with a mouse pointer. If you detect presses as being a touch down followed by a touch up with no dragging between the two, that code will work wonderfully in the simulator. On the device, however, your fingers are not as accurate. You need to have a threshold of movement that doesn't count as a drag but still counts as a press. I realised this after friends trying my prototype on my iPhone had all sorts of difficulties. My "well, it works on the simulator" is not a helpful response and just leads to a conversation that ends with "are you saying my fingers are fat?"
- Tags are great! If you have a whole stack of checkboxes, how do you tell them apart? Up until now I was either 1) linking an outlet to each control and doing the comparison to sender in a single IBAction or 2) having an IBAction for each button. On Win32, I'd have hidden an ID in GWL_USERDATA and avoided both of these pox-riddled solutions. Turns out, of course, Cocoa has an equivalent but it's called something different: the 'tag'. You can set this in IB and read it in a single IBAction dispatcher with code like if (1 == [sender tag]) { yodel_continuously(); }. I'm kicking myself for not knowing this before given how long I've been Cocoaing, but better late than never.
- Tool windows need to accept first mouse. The utility windows in the editor needed one click to activate and another to do something. This is an awful user experience in an editor, particularly with the block selection view. What I was missing was this little piece of code in the relevant views:
-(BOOL)acceptsFirstMouse:(NSEvent *)theEvent { return (YES); }
With this, everything worked as I expected.
The net result of all this is something that looks pretty good. Here’s a poorly taken screenshot of a poorly designed level I was testing with recently:
I’m not quite ready to say what the game is all about or what it is called, but soon. Of course, with my two readers, it’s not as if I’m leaking anything substantial, it’s just a… thing.
Note the class-A program icon bottom left. I drew that. I’m so proud. EGADASI2 (pronounced ‘eggadassie’) as only I say after close to the LD50 of wine.
A cake was involved
The icing on the cake (oh, and I baked a cake. Not as good as this one but any cake is a win) with this update is that the levels in this editor can now be loaded and played in the game itself. The game code was a bit messy, which is not enormously surprising given how long ago the core of it was written and what I knew about Cocoa at the time, and I’ve had to spend some time chopping down the weeds before I can plant anything nice, but the levels are playable. I’ve fixed some issues, tidied some bits and pieces and generally brushed away some code cobwebs in preparation for the remaining features.
What are the things left before I have an actual product I could possibly stick on the App Store? Quite a lot, unfortunately:
- A whole stack of graphics. Mostly glue graphics: splash screens, menu panels, etc., but also some additional blocks and entities.
- Some sound. I need some sound effects and perhaps the odd little jingle for start of level, end of level and the title screen.
- Levels, lots of levels! I really need a good twenty levels to feel comfortable about having a first release. I have two so far. Yes, two. But at least I have a half decent editor now.
- Some code. The engine needs bad-guy code, parallax rejigging, Game Center integration and a whole pile of little things. About three weeks’ worth of little things, in fact.
I don’t quite know how I’m going to get all this done given I can only really pay in roast dinners and wine these days. As delicious as they both are, they don’t pay people’s bills. I’m sure I’ll come up with something, though, and it does at least feel good to have made progress and to make an update that doesn’t involve a comprehensive package of excuses.
Yey!
–
1 I must get around to creating an account there. Given how much time I spend on Stack Overflow and how important it has become it seems offish for me not to contribute on the rare occasion that I feel that I can.
2 Every Good Application Deserves A Splendid Icon
My initial thoughts on the default constructor were that it was called automatically. From what I can see, it seems that this is so, but a constructor called automatically doesn't initialize int or char variables, for example, to zero as I originally thought. It appears that they are left with a garbage value or whatever was at that memory location.
**Is it true that the default constructor (if you don't provide a constructor yourself) is called automatically though does not initialize the class variables to any meaningful value?** I initially thought that it would initialize class variables to zero but it seems that I am wrong on that. It could be a flaw in my understanding. My code below prints three garbage values if I don't initialize them.
#include <iostream>
#include <cstdlib>
using namespace std;

class test
{
public:
    char a, b, c;
    void testfunct(char d, char e, char f)  // was declared char but returned nothing
    {
        a = d;
        b = e;
        c = f;
    }
};

int main()
{
    test classtest;
    cout << classtest.a << "\n";
    cout << classtest.b << "\n";
    cout << classtest.c << "\n";
    system("pause>nul");
}
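The distinction, for reference: default-initialization (`test classtest;`) leaves members of built-in type indeterminate, while value-initialization (`test classtest{};`) guarantees zeros. A minimal sketch with an illustrative struct:

```cpp
#include <cassert>

struct demo {
    char a, b, c;   // built-in types: not zeroed by the implicit default constructor
};

// Value-initialization ("demo d{};") zero-initializes members of built-in
// type; default-initialization ("demo d;") would leave them indeterminate.
demo make_zeroed() {
    demo d{};       // a, b and c are all '\0' here
    return d;
}
```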
Edited by daino: correction
Templates are a C++ feature that lets us write code once and use it for any data type, including user-defined types. For example, sort() can be written once and used to sort items of any data type. A stack class can be created that works as a stack of any data type.
What if we want a different code for a particular data type? Consider a big project that needs a function sort() for arrays of many different data types. Let Quick Sort be used for all datatypes except char. In case of char, total possible values are 256 and counting sort may be a better option. Is it possible to use different code only when sort() is called for char data type?
It is possible in C++ to get a special behavior for a particular data type. This is called template specialization.
// A generic sort function
template <class T>
void sort(T arr[], int size)
{
    // code to implement Quick Sort
}

// Template Specialization: A function
// specialized for char data type
template <>
void sort<char>(char arr[], int size)
{
    // code to implement counting sort
}
Another example could be a class Set that represents a set of elements and supports operations like union, intersection, etc. When the type of elements is char, we may want to use a simple boolean array of size 256 to make a set. For other data types, we have to use some other complex technique.
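That char-specialized Set can be sketched as follows (the class and member names here are illustrative, not from a real library):

```cpp
#include <cassert>

// Primary template: would use some general-purpose storage strategy.
template <class T>
class Set {
    // ... generic representation omitted ...
};

// Specialization for char: one flag per possible character code.
template <>
class Set<char> {
    bool present[256] = {};   // all false initially
public:
    void insert(char c)         { present[static_cast<unsigned char>(c)] = true; }
    bool contains(char c) const { return present[static_cast<unsigned char>(c)]; }
};
```

Only Set&lt;char&gt; pays for the 256-entry table; every other instantiation falls back to the primary template.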
An Example Program for function template specialization
For example, consider the following simple code where we have general template fun() for all data types except int. For int, there is a specialized version of fun().
#include <iostream>
using namespace std;

template <class T>
void fun(T a)
{
    cout << "The main template fun(): " << a << endl;
}

template <>
void fun(int a)
{
    cout << "Specialized Template for int type: " << a << endl;
}

int main()
{
    fun<char>('a');
    fun<int>(10);
    fun<float>(10.14);
}
Output:
The main template fun(): a
Specialized Template for int type: 10
The main template fun(): 10.14
An Example Program for class template specialization
In the following program, a specialized version of class Test is written for int data type.
#include <iostream>
using namespace std;

template <class T>
class Test {
    // Data members of Test
public:
    Test()
    {
        // Initialization of data members
        cout << "General template object \n";
    }
    // Other methods of Test
};

template <>
class Test<int> {
public:
    Test()
    {
        // Initialization of data members
        cout << "Specialized template object\n";
    }
};

int main()
{
    Test<int> a;
    Test<char> b;
    Test<float> c;
    return 0;
}
Output:
Specialized template object
General template object
General template object
How does template specialization work?
When we write any template-based function or class, the compiler creates a copy of that function/class whenever it sees it being used for a new data type, or a new set of data types in the case of multiple template arguments.
If a specialized version is present, the compiler checks the specialized versions before the main template: it matches the passed parameters against the data type(s) specified in each specialization and prefers the most specialized match.
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above
Hi, I've got a few (9) random questions, mainly about monads and building monads from existing monads, partly trying to confirm conclusions I've come to through experimentation. Any, and all, attempts to enlighten me will be much appreciated.

Thanks, Daniel

First, terminology. In

    StateT s (ReaderT r IO) ()

Q. 1) StateT is referred to as the outermost monad, and IO as the innermost monad, correct?

Using a monadic function, e.g. MonadReader.ask, in a monadic expression will access the outermost monad of the appropriate class.

Q. 2) Does this work for all monad classes in all expressions?

How does Control.Monad.Trans.lift work? It seems that a single application of lift will find the next outermost monad of the appropriate class, but if you want to dig deeper into the nest you need to apply lift according to the monad's actual depth in the nest.

Q. 3) Why the different behaviour?

Q. 4) Is it possible to give a type to the lifted function so that the monad of the correct class _and_ type is used? E.g. dig into a String Reader rather than an Int Reader.

Defining an instance of MonadTrans for a monad instance seems universally useful.

Q. 5) Are there obvious situations where it's not useful or possible?

Carrying out IO in a nested monadic expression requires liftIO. Apart from having to type an extra 7-9 characters, it seems good to use liftIO even in plain IO monad expressions so they can become nested expressions with no trouble later on.

Q. 6) Is it safe to always use liftIO, even in the plain IO monad?

Q. 7) If it's safe to do, why aren't functions in the IO monad just typed in the MonadIO class instead?

It looks to me like types with class constraints are better than types specifying nests of monad instances. So

    g :: (MonadReader String m, MonadState Int m, Monad m) => m ()

is better than

    g :: StateT Int (Reader String) ()

because you can change the instance of the monadic class at will.
Also you can change the nesting order of the monads, though maybe that's not useful in practice. The disadvantage seems to be that you can't use lift to access nested monads.

Q. 8) Is it possible to get access to nested monads when using class constraint types?

In the following code, the test2 function is not valid because there is no instance for (MonadCounter (ReaderT [Char] (StateT Word IO))), which is a fair enough complaint.

Q. 9) What allows ReaderT and StateT to be nested in arbitrary order but not ReaderT and CounterT? Especially given CounterT is actually a StateT.

    class (Monad m) => MonadCounter m where
        increment :: m Word
        decrement :: Word -> m ()

    type Counter = State Word

    instance MonadCounter Counter where
        increment = increment_
        decrement = decrement_

    runCounter :: Counter a -> a
    runCounter c = evalState c 0

    type CounterT m = StateT Word m

    instance (Monad m) => MonadCounter (CounterT m) where
        increment = increment_
        decrement = decrement_

    runCounterT :: (Monad m) => CounterT m a -> m a
    runCounterT c = evalStateT c 0

    increment_ :: (MonadState Word m) => m Word
    increment_ = do
        w <- get
        put (w + 5)
        return w

    decrement_ :: (MonadState Word m) => Word -> m ()
    decrement_ w = do
        curW <- get
        if w > curW then put 0 else put (curW - w)
        return ()

    test1 :: IO ()
    test1 = runReaderT (runCounterT bar) "blah"

    --test2 :: IO ()
    --test2 = runCounterT (runReaderT bar "blah")

    bar :: (MonadReader String m, MonadCounter m, MonadIO m) => m ()
    bar = do
        w <- increment
        s <- ask
        liftIO $ putStrLn $ (show w) ++ s
        return ()
The field of economics is not particularly known for its generosity, so an academic paper might not be the first place you turn to when choosing a gift for a friend or loved one.
Ironically, the study finds that we're awful gift-givers precisely because we spend too much time trying to be considerate. We imagine our friends opening a gift that is impressive, expensive, and sentimental. We imagine the look of delirious happiness and surprise on their faces ("You really know me! This was so thoughtful!") and the warmth we feel in return ("Yeah, I do! Yes, I thought a lot about it!"). But there's something that the most sentimental gift-givers tend to not think too much about: whether the gift is practical in the first place. Their clever paper asks givers and recipients to rate gifts along two metrics: desirability (i.e.: the quality of a restaurant, the cost of a coffee maker, the visual complexity of the video game) and feasibility (i.e.: the proximity of that restaurant, the ease of the coffee maker, the learning curve of the video game). Across several experiments, they find that givers consistently give gifts based on desirability and recipients consistently favor gifts based on feasibility.
For example, given the choice between buying somebody a gift card at an expensive Italian restaurant that’s far away and buying a gift card to a well-rated restaurant that is nearby, givers consistently went for the luxury restaurant, while receivers in the study said they preferred the place closer to home. The same was true for coffee makers: Givers said they wanted to buy the most expensive; recipients said they just wanted the easiest to use.
Another experiment conducted on Amazon’s Mechanical Turk asked participants to imagine a choice between a feasible software gift (a simple, straightforward photo-editing program) and a complicated but more advanced photo-editing program. In a control group, gift-givers made the classic mistake of splurging for the second, more complicated program, while the recipients were considerably more likely to say they wanted the simpler, more useful software. But in this experiment, there was a clever twist. Half the group was told to “first consider their own preference for the item.” By focusing on themselves—and coming to terms with the fact that they wouldn't have appreciated a complicated, expensive software program that they would have never figured out—they ironically came closer to giving the recipients what they wanted.
Still, we often buy gifts to be sentimental, and that's okay. The point of many gifts, such as jewelry or art, is precisely that they're not practical. Spending a lot of money on something that isn't merely useful is a way of saying: I like you enough to buy you stuff that simply says, "I like you."
At the same time, when we buy gifts that we hope the recipients will use, we tend to think more about sentimentality than utility. After a while, many gifts are just things. And if they're not useful, or practical, or convenient, then what exactly makes them a great gift?
Javarebel style redeployment - Stuart Douglas, Apr 7, 2009 4:30 AM
I got sick of redeploying everything in my ear this morning and wrote a JavaRebel-style hot deployment filter. Every request it looks for changed classes inside Seam jars deployed in an ear and replaces the definitions with the new ones. This does let it replace the method bodies of EJBs (however you cannot add business methods).
Would people be interested in this? If so, would there be interest in seeing it included in Seam?
1. Re: Javarebel style redeployment - Stuart Douglas, Apr 7, 2009 5:10 AM (in response to Stuart Douglas)
I have uploaded it here: EJBTHREE-1096
2. Re: Javarebel style redeployment - Jason Long, Apr 7, 2009 4:26 PM (in response to Stuart Douglas)
I will definitely try this out. Thanks.
I have tried connecting the debugger to my application server in eclipse, but this is not that stable.
Does this method work better than that?
Also, everyone please vote for EJBTHREE-1096
3. Re: Javarebel style redeployment - Arbi Sookazian, Apr 7, 2009 6:20 PM (in response to Stuart Douglas)
Me likes :)
I download the zip file. Is this compatible with Seam 2.0 and 2.1?
How about changing the signatures of the business (local and remote interface) methods?
Will it somehow notify if you added a business method and it didn't pick it up for the ant explode?
Good job on this bro! I voted for this JIRA a long time back...
what is the technical limitation that we cannot capture new business methods as well on ant explode with your solution?
4. Re: Javarebel style redeployment - Arbi Sookazian, Apr 7, 2009 6:24 PM (in response to Stuart Douglas)
and what about adding non-public methods to an EJB? (e.g. private void foo(), protected void bar()) will it pick that up?
I guess I should just try it out...
5. Re: Javarebel style redeployment - Arbi Sookazian, Apr 7, 2009 6:32 PM (in response to Stuart Douglas)
This is implemented as a seam filter...
So do we need to modify the web.xml for our apps or what? The only src file is this:
public class Agent {
    static Instrumentation inst;

    /**
     * Stores a reference to instrumentation
     * @param agentArgs
     * @param i
     */
    public static void premain(String agentArgs, Instrumentation i) {
        if (!i.isRedefineClassesSupported()) {
            System.out.println("Class redefinition not supported");
        } else {
            inst = i;
        }
    }

    /**
     * Attempts to replace the specified classes while the JVM is running
     * @param da
     * @throws UnmodifiableClassException
     * @throws ClassNotFoundException
     */
    public static void replaceClass(ClassDefinition[] da) throws UnmodifiableClassException, ClassNotFoundException {
        if (inst != null) {
            inst.redefineClasses(da);
        } else {
            System.out.println("Class redefinition not supported");
        }
    }
}
and it doesn't implement a Filter interface.
???
am I missing something here?
Also recommend using log4j rather than System.out.println...
6. Re: Javarebel style redeployment - Stuart Douglas, Apr 7, 2009 10:12 PM (in response to Stuart Douglas)
There is ClassRedefinitionFilter in the root of the zip; it is the filter and should go in your Seam project. Agent.java is part of hot-deploy.jar, the java agent. When you start JBoss, start it with the JVM option -javaagent:/path/to/hot-deplloy.jar.
This approach uses the Java instrumentation API, so it is not as good as JavaRebel, in that it does not let you delete methods. It may have other limitations depending on what JVM you are using.
This is just a quick and dirty version that I knocked up, I am going to try and work on this over easter and try and remove some of the limitations.
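For anyone wiring this up: Agent.replaceClass takes an array of ClassDefinition objects, each pairing an already-loaded class with its recompiled bytecode. A minimal, hypothetical helper (not part of the posted zip) might look like this:

```java
import java.lang.instrument.ClassDefinition;

// Hypothetical helper: pair a loaded class with freshly compiled bytecode,
// producing the ClassDefinition that Agent.replaceClass(...) expects.
class Redefiner {
    static ClassDefinition of(Class<?> clazz, byte[] newBytecode) {
        return new ClassDefinition(clazz, newBytecode);
    }
}
```

In the filter you would read newBytecode from the changed .class file inside the exploded archive before handing the definitions to the agent.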
7. Re: Javarebel style redeployment - Ale Feltes Quenhan, Apr 29, 2009 5:47 PM (in response to Stuart Douglas)
I was also having problems with EJBs and JavaRebel. The context variable was basically used for a different purpose, which is illegal (or you have the hot deployment classloader scenario described above). Stuart's suggestion worked perfectly for me.
There is only one typo in the jar's name in your post (one extra l). I placed ClassRedefinitionFilter.java in its corresponding package, com.brite.framework, added the JVM option to my JBoss, and hot deploy worked like a charm.
It also advises you (with a stack trace) about code that could not be replaced, which is also very handy.
Thanks for sharing.
Ale
8. Re: Javarebel style redeployment - Jevgeni Kabanov, Nov 25, 2009 4:53 PM (in response to Stuart Douglas)
Hi guys, if you want to help get JRebel and Seam working together please sign off here:
JRebelAndSeamIntegration
The following example shows how you can disable keyboard navigation on the Flex Accordion container by extending the Accordion class and overriding the protected keyDownHandler() method.
Full code after the jump.
<?xml version="1.0" encoding="utf-8"?>
<!-- -->
<mx:Application xmlns:
    <comps:MyAccordion
        <mx:VBox
            <mx:Label
        </mx:VBox>
        <mx:VBox
            <mx:Label
        </mx:VBox>
        <mx:VBox
            <mx:Label
        </mx:VBox>
        <mx:VBox
            <mx:Label
        </mx:VBox>
        <mx:VBox
            <mx:Label
        </mx:VBox>
    </comps:MyAccordion>
</mx:Application>
/**
 *
 */
package comps {
    import mx.containers.Accordion;
    import flash.events.KeyboardEvent;

    public class MyAccordion extends Accordion {
        public function MyAccordion() {
            super();
        }

        override protected function keyDownHandler(evt:KeyboardEvent):void {
        }
    }
}
View source is enabled in the following example.
Due to popular demand, here is the “same” example in a more ActionScript friendly format:
<?xml version="1.0" encoding="utf-8"?> <!-- --> <mx:Application xmlns: <mx:Script> <![CDATA[ import mx.containers.VBox; import mx.controls.Label; import comps.*; private var accordion:MyAccordion; private var v1:VBox; private var v2:VBox; private var v3:VBox; private var v4:VBox; private var v5:VBox; private var l1:Label; private var l2:Label; private var l3:Label; private var l4:Label; private var l5:Label; private function init():void { l1 = new Label(); l1.text = "One"; l2 = new Label(); l2.text = "Two"; l3 = new Label(); l3.text = "Three"; l4 = new Label(); l4.text = "Four"; l5 = new Label(); l5.text = "Five"; v1 = new VBox(); v1.label = "One"; v1.percentWidth = 100; v1.percentHeight = 100; v1.addChild(l1); v2 = new VBox(); v2.label = "Two"; v2.percentWidth = 100; v2.percentHeight = 100; v2.addChild(l2); v3 = new VBox(); v3.label = "Three"; v3.percentWidth = 100; v3.percentHeight = 100; v3.addChild(l3); v4 = new VBox(); v4.label = "Four"; v4.percentWidth = 100; v4.percentHeight = 100; v4.addChild(l4); v5 = new VBox(); v5.label = "Five"; v5.percentWidth = 100; v5.percentHeight = 100; v5.addChild(l5); accordion = new MyAccordion(); accordion.percentWidth = 100; accordion.percentHeight = 100; accordion.addChild(v1); accordion.addChild(v2); accordion.addChild(v3); accordion.addChild(v4); accordion.addChild(v5); addChild(accordion); } ]]> </mx:Script> </mx:Application>
Hey Peter,
Just wanted to say thank you again! This worked out perfectly for what I needed it for.
Thanks,
- Nick
I made an accordion some time ago along these lines. I extended the Accordion, but I didn't want to disable the keyboard entirely, just navigation to disabled headers, while still being able to use both mouse and keyboard to navigate.
I use a recursive function to achieve this; however, it acts funny if you disable all headers (but why disable all headers anyway..)
I was looking for code to open my form with the accordion totally collapsed. In other words, all accordion pages are closed. The default is to have the first page opened.
I looked through Adobe forums, adobe class for accordion and around the internet.
I think I need something like:
my_acc.getChildAt(my_acc.numChildren -1);
I tried:
cfformitem type=”script”
function formOnLoad(){
var theChild_obj:Object = {};
var theChild_obj:Object = info.getChildAt(info.numChildren -1);
}
/cfformitem
But this did not work.
I am using CFusion MX 7 with cfform format=”Flash”
Any help would be greatly appreciated.
Thanks, Thomary
I used a work around. added a blank page and defaulted to that one.
Thanks for all these pages.
I found a lot of useful pages here.
I created a blog; I plan to post articles about Flex and coding as soon as I have some free time.
For now I have the code I posted above (without errors, since I didn't parse it to be digested by the posting machine here..) with a working example.
You can read it here
Regards
MA bounce - user manual
MA bounce indicator
Basic idea
The moving average is a widely used technical indicator, which in many cases acts as dynamic support or resistance. It is important to use the correct moving average period; the most commonly used periods are 20 (sometimes 32) and 200. This is the basic idea of my MA bounce indicator, which indicates possible bounces from a defined moving average. Of course, without filtering there would be many false signals, because price sometimes tends to oscillate around a moving average. This is why MA bounce uses a combination of trend indicators and oscillators for filtering bounce signals.
MA bounce settings
MA bounce indicator can operate in 2 modes: Proximity sensor and Bounce indicator. Proximity sensor is just a simple alert indicator, which gives you an alert when price is within a defined range of the moving average.
Bounce indicator is a more complex indicator which gives multiple BUY/SELL signals and additional filtering indicators as well. Now I will try to explain the functionality of every Bounce indicator setting.
HTF MA is simply a moving average from a higher timeframe. You can set the MA period, MA type, MA applied price and the timeframe used for calculating the HTF MA. If you use the current TF for the HTF MA, you will get a classic moving average for your current chart.
HTF MA can be used for filtering, or it can be used as another moving average which will act as dynamic support/resistance. One good rule is to enter BUY only when price is above the HTF MA, or SELL when price is below the HTF MA. Another indicator which can be used for filtering is the HTF line. The HTF line is a color representation of the trend on the higher timeframe selected in the Timeframe for HTF MA option.
The HTF line is calculated from several trend indicators and oscillators. A color change of the HTF line can sometimes be a good signal for entering a trade. Keep in mind that the HTF line doesn't represent all market conditions exactly, because there is no indication of a ranging market. For example, if all conditions for a downtrend are met, the HTF line will be red until all conditions for an uptrend are met. The reason for this approach is to eliminate frequent color changes of the HTF line, which would result in many false signals. The HTF line can work in 2 modes - long term and short term. Short term mode calculates market conditions from the current timeframe and reacts faster to changes in trend. On the other hand, long term mode is calculated from a higher timeframe, therefore it reacts much more slowly to changes in trend.
MA bounce arrows are the main part of this indicator. They signal when there is a high probability that price will bounce UP or DOWN from the defined MA. They signal bounces from the HTF MA as well; bounces from the HTF MA are signals with a pretty high probability of success. But I don't recommend using the arrows without further filtering before entering a trade. I implemented 3 filtering tools which I will describe later, and I will show you some trading examples as well.
The first great tool for filtering is the supply and demand zones indicator. The user can choose the timeframe used for S&D zone calculation, the colors for the zones, and whether weak zones should be displayed as well.
The Show weak zones option simply shows every possible supply or demand zone (price touched the zone at least 1 time). If you disable this option, the indicator shows only strong zones (price touched the zone at least 2 times) and you will end up with a clearer chart.
A good rule is to enter BUY when a bounce arrow UP appears near a demand zone, or enter SELL when a bounce arrow DOWN appears near a supply zone. When there is an UP arrow near a supply zone, you should skip that trade. A similar rule applies for a DOWN arrow and a demand zone. Supply & demand zones can be used for placing SL and TP as well.
The second tool is the automatic FIBO levels indicator. FIBO levels are a great tool for placing SL and TP. The user can enter custom FIBO levels and select the timeframe for FIBO calculation, the FIBO depth (number of bars used for calculation) and the color of the FIBO levels.
FIBO levels are calculated from previous closed bar. For example, if you select D1 timeframe and FIBO depth set to 5, FIBO levels will be calculated from previous 5 days.
The last tool that can be used for filtering is the oscillator signal HUD. This tool uses a combination of 3 oscillators with different settings. Based on user settings, it gives information about the current chart situation.
I will explain the settings of this tool using the stochastic oscillator. Oscillator upper and lower limits are basically the limits for the overbought and oversold areas. Depending on the oscillator value and oscillator slope, the oscillator HUD can give 7 different outputs:
Strong SELL - oscillator value above upper limit, oscillator slope is negative and absolute value of slope >= slope threshold
Strong BUY - oscillator value below lower limit, oscillator slope is positive and absolute value of slope >= slope threshold
Weak SELL - oscillator value above upper limit, absolute value of slope < slope threshold
Weak BUY - oscillator value below lower limit, absolute value of slope < slope threshold
Trending DOWN - oscillator value between upper and lower limits, oscillator slope is negative
Trending UP - oscillator value between upper and lower limits, oscillator slope is positive
No clear direction - other combinations
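The seven outputs above amount to a small classification function. An illustrative C rendering (not the indicator's actual MQL source):

```c
#include <assert.h>
#include <math.h>

/* Illustrative reimplementation of the oscillator HUD classification. */
typedef enum {
    STRONG_SELL, STRONG_BUY, WEAK_SELL, WEAK_BUY,
    TRENDING_DOWN, TRENDING_UP, NO_CLEAR_DIRECTION
} HudSignal;

HudSignal classify(double value, double slope,
                   double upper, double lower, double slopeThreshold) {
    if (value > upper) {                       /* overbought area */
        if (slope < 0 && fabs(slope) >= slopeThreshold) return STRONG_SELL;
        if (fabs(slope) < slopeThreshold)               return WEAK_SELL;
    } else if (value < lower) {                /* oversold area */
        if (slope > 0 && fabs(slope) >= slopeThreshold) return STRONG_BUY;
        if (fabs(slope) < slopeThreshold)               return WEAK_BUY;
    } else {                                   /* between the limits */
        if (slope < 0) return TRENDING_DOWN;
        if (slope > 0) return TRENDING_UP;
    }
    return NO_CLEAR_DIRECTION;                 /* other combinations */
}
```

Note that, for example, an above-limit value with a strongly positive slope falls through to "No clear direction", matching the "other combinations" catch-all.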
Trading examples
iCustom implementation
int start()
{
double ArrowUP = get_MAB(0);
double ArrowDOWN = get_MAB(1);
double HTFma = get_MAB(5);
double HTFlineUP = get_MAB(6);
double HTFlineDOWN = get_MAB(7);
//if(ArrowUP != 0) .... BUY...
return(0);
}
double get_MAB(int buff)
{
double mab_val = iCustom(Symbol(), PERIOD_CURRENT, "\\MA Bounce",
"---",
1, //Alert method 0 - Proximity sensor, 1 - Bounce indicator
20, //Current MA period
MODE_SMA, //Current MA type
PRICE_CLOSE, //Current MA applied price
2.0, //MA proximity range in pips
"---",
false, //Show HTF MA
200, //HTF MA period
MODE_SMA, //MA type
PRICE_CLOSE, //MA applied price
PERIOD_H1, //Timeframe for HTF MA
"---",
true, //Show arrows
true, //Show HTF line
10, //Filter period
5000, //Bars limit
"---",
3, //SR zones method 0 - Supply & Demand, 1 - FIBO, 2 - Both, 3 - None
"---",
false, //Show weak zones
60, //Supply & demand zones TF
clrPink, //Supply color
clrPaleTurquoise, //Demand color
"---",
"0, 23.6, 38.2, 50, 61.8, 80.9, 100, 161.8, 423.6", //FIBO levels
1440, //Fibo levels TF
1, //Fibo depth
clrMagenta, //Fibo levels color
"---",
false, //Use alerts
false, //Use push notifications
false, //Use email notifications
"---",
true, //Show info text
80.0, //Oscillator upper limit
20.0, //Oscillator lower limit
5, //Slope length
1.5, //Slope threshold
233, //Arrow style UP
234, //Arrow style DOWN
buff,
1);
return (mab_val);
}
In “Tutorial: DIY Kinetis SDK Project with Eclipse – Startup” I showed how to create a Kinetis SDK project from scratch. This post is about adding the board initialization files. With the board initialization, the peripheral clocks and pin muxing are configured.
Clocks and Pin Muxing
As outlined in “Tutorial: DIY Kinetis SDK Project with Eclipse – Startup“, up to main() I already have a basic configuration:
- Stack and Heap are defined
- ‘Critical’ hardware like Watchdog/COP and basic CPU clocks are configured
- C/C++ variables are initialized
- Any ANSI library runtime settings are done
But for many modern microcontrollers, there is yet another hardware configuration needed, which usually is done from main() as one of the first steps: configuring the clock gating and pin muxing.
‘Clock Gating‘ means that clocks are configured so that the peripherals in use are clocked. By default on ARM cores, peripheral clocks need to be enabled, and accessing a peripheral (such as I/O pins) whose clock is not enabled will very likely result in an exception or hard fault.
The other thing is ‘Pin Muxing’: on modern microcontrollers, an external pin on the processor package can be used by different peripherals. For example, the picture below shows the ‘Processor’ view in Processor Expert. Pin No 7 can be used for I/O, UART, I2S, FTM or even for USB, and here it is configured and routed to PTE6 as an I/O pin:
Pin muxing (or routing) can in many cases be changed at runtime, but it is important to configure it properly at the beginning, before doing the driver initialization, as the muxing is ‘shared’ between the peripherals and drivers.
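To make the ordering concrete, here is a hypothetical register-level sketch of the two steps for PTE6: open the port’s clock gate, then select the pin’s mux alternative. The names mimic Kinetis conventions, but the registers are modeled as plain variables here; this is not the actual SDK API:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative register model -- NOT the real Kinetis headers.
 * On real silicon these are memory-mapped SIM and PORT registers. */
static uint32_t SIM_SCGC5;
static uint32_t PORTE_PCR[32];

#define SIM_SCGC5_PORTE_MASK (1u << 13)          /* illustrative bit position */
#define PORT_PCR_MUX(x)      (((uint32_t)(x) & 7u) << 8)

/* Enable the PORTE clock, then mux pin PTE6 as GPIO (ALT1).
 * Touching PORTE_PCR before the clock gate is open would hard fault
 * on real hardware -- hence the ordering. */
void configure_pte6_as_gpio(void) {
    SIM_SCGC5 |= SIM_SCGC5_PORTE_MASK;           /* 1) gate the peripheral clock on */
    PORTE_PCR[6] = PORT_PCR_MUX(1);              /* 2) route pin 6 to ALT1 (GPIO)   */
}
```

The SDK’s board files do exactly this kind of work for every pin the board uses, which is why they must run before any driver initialization.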
SDK Board Files
I’m adding my board to the project with the files in ${KSDK_PATH}\boards:
And I make sure the compiler knows the include path to the boards folder. But as the board files are using a bunch of other include files of the SDK, I need to add some extra compiler include paths:
"${KSDK_PATH}/boards/frdmk64f120m" "${KSDK_PATH}/platform/drivers/gpio" "${KSDK_PATH}/platform/hal/port" "${KSDK_PATH}/platform/hal/gpio" "${KSDK_PATH}/platform/drivers/clock" "${KSDK_PATH}/platform/utilities" "${KSDK_PATH}/platform/hal/sim"
With this, I’m able to compile and link the board files :-).
Initializing the Board
To initialize the board, I include the header file:
#include "board.h"
and call the initialization function:
hardware_init(); /* initialize the hardware */
My application now does ‘nothing’: it only calls main(), and there I initialize the hardware:
But it will complain about missing
clock_manager_set_gate():
This means I need to add the needed files to configure clocks and the SIM (System Integration Module). So I add more files: clock and sim:
And again, I need to add an include path for the compiler:
"${KSDK_PATH}/platform/drivers/interrupt"
Unfortunately, there is a bug in the current SDK V1.0.0-Beta: fsl_sim_features.h reports that it does not know my correctly specified “CPU_MK64FN1M0VLL12” :-(:
Description Resource Path Location Type #error "No valid CPU defined" fsl_sim_features.h /FRDM-K64F_Bare_SDK/SDK/platform/hal/sim line 279 C/C++ Problem
So I need to add the following to my compiler preprocessor settings to make the compiler happy:
"CPU_MK64FN1M0VMD12"
And with this I can build:
   text    data     bss     dec     hex filename
  17912    2476     260   20648    50a8 FRDM-K64F_Bare_SDK.elf
newlib-nano
Well, that’s a lot of code size! Time to switch to the smaller newlib-nano library. So I add
-specs=nano.specs
to my linker settings:
With this, the code size gets cut by half :-):
   text    data     bss     dec     hex filename
   8376     152      44    8572    217c FRDM-K64F_Bare_SDK.elf
Now, those 8 KBytes (yikes!) of code are still a *lot*, given what our application does: it does *nothing* (yet). The reason is how the SDK is architected: it uses a lot of tables and internal data structures, and does a lot of things dynamically (e.g. hardware initialization). Therefore the code size (and speed) overhead compared to a ‘traditional’ project is not small. Even ‘pure’ Processor Expert projects are much smaller for ‘doing nothing’, because Processor Expert can generate just the code needed and does not have to carry along all the extra stuff. The same thing with a Processor Expert project would be less than 4 KBytes of code!
hardware_init()
But what is
hardware_init() doing? Not that much, but a very important thing: to setup the clocks and pin muxing:
void hardware_init(void)
{
  int32_t i;

  /* enable clock for PORTs */
  for (i = 0; i < HW_PORT_INSTANCE_COUNT; i++) {
    clock_manager_set_gate(kClockModulePORT, i, true);
  }
  /* init the general pinmux */
  configure_enet_pin_mux(0);
  for (i = 0; i < HW_PORT_INSTANCE_COUNT; i++) {
    configure_gpio_pin_mux(i);
  }
  configure_i2c_pin_mux(0);
  configure_i2c_pin_mux(1);
  configure_sdhc_pin_mux(0);
  configure_spi_pin_mux(0);
  configure_uart_pin_mux(0);
}
The first for loop enables the clock gates (passes the clocks to the peripherals) for the PORT (GPIO) peripherals. I recommend that you use the debugger and step through the code (e.g. step into clock_manager_set_gate()); then you can see what contributes to the code size.
Next it configures the Ethernet pin muxing with configure_enet_pin_mux(), followed by a for loop which does the same for the general purpose I/O pins. As we have two I2C peripherals on this microcontroller, there are two calls to configure_i2c_pin_mux().
Summary
Before I can run my application code, the microcontroller needs to be initialized properly. Basically this means configuring the pin muxing and clock gates. As this usually depends on the board and on what is attached to the pins, this is also called ‘board configuration’. The Kinetis SDK has preconfigured board configurations, e.g. for the FRDM-K64F. The configuration is done with the function hardware_init(), which needs to be called at the beginning of main(). With the Kinetis SDK I can initialize my hardware and board in a programmatic way and do my custom board configuration. However, given how the SDK is architected, be ready for some overhead. For microcontrollers with a lot of FLASH like the K64F this does not matter much, but for smaller microcontrollers saving every byte is important.
The project is available on GitHub. So what’s next? Blinking a LED!
Happy Boarding 🙂
With the transition from CW to KDS do you think we’ll have board configurations to earlier boards (like the FRDM-KL25Z, FRDM-K20D50M, etc.) or we’ll be stuck in two worlds having to choose the developing environment in function of the target board/microcontroller?
It is the Kinetis SDK which comes with board configuration files, and yes, they have been added only for the newer boards for now. I have been told that the KL25Z will be added to the KSDK too. But having board configuration files or not does not prevent me from using KDS for earlier boards at all. Such configuration files did not exist for CodeWarrior either, so no difference for me. The point really is that the SDK itself introduces a sharp discontinuity: because of it there are no Processor Expert LDDs (or components) any more, only SDK ones. I have many projects for earlier boards which I now cannot move to the new parts (like the FRDM-K22F), as with the SDK everything got incompatible.
Hi Erich,
I think I’m being probably victim of nomenclature (or clashes of!):
I don’t know if it is possible to paste pictures in this comments section, so I’ll try to describe what I’m talking about “board configurations” in CW:
Open Processor Expert->Components Library and in the Categories tab see the first leaf (folder-like icon), “Board Support”. Under it you'll find a board configuration for the available chip (for example, if you have opened a Processor Expert project for the MKL25Z128VLK4 CPU, you'll see a FRDM-KL25Z board available, and so on).
In general, for the CPUs available there seems to be more emphasis on the TWR-… type of boards, and the Freedom board is the only other one I know of.
Regards,
—
Cesar Rabak
Hi Cesar,
if you install additional Kinetis SDK update packages (they are inside the Kinetis SDK, tools support folder), then more TWR and FRDM boards get added.
https://mcuoneclipse.com/2014/06/22/tutorial-diy-kinetis-sdk-project-with-eclipse-board-configuration/
Although we only expect Fuchsia to run on little endian (LE) CPU architectures, we still need to consider big endian (BE) issues. This doc explains:
A lot of peripheral hardware defines multi-byte BE values which must be converted.
Network byte order is BE. SCSI data structures are BE.
Even if Fuchsia never runs on a BE CPU (which it might someday, at least in theory), some of our code may be ported to a BE CPU.
Any time we define a multi-byte value, we create the possibility that another platform may want to write or read that value, and our code (which is open source) may be ported to that platform in order to do this.
Many modules do not need to do anything about endian issues; their data will only be interpreted by a single CPU running Fuchsia.
For those which might be ported to other OS's, or whose data might be exported by any channel:
Suggested style in C or C++ is to add

#include <endian.h>
...
static_assert(__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__);

either in every file, or accompanied by a comment explaining which files are not BE compatible.
(It's OK to not do anything, but better to make it explicit that the code is not BE compatible.)
In structures that are inherently endian, it's best to include macros that “convert” little-endian data to CPU endianness; this is a form of self-documenting code. Of course big-endian data should always use the macros.
Best style is to use the LE16 .. BE64 macros from endian.h, which should be available everywhere including DDK.
#include <endian.h>
...
hw_le_struct.int_field = LE32(program_int);
program_long = BE64(hw_be_struct.long_field);
To access multi-byte values in a byte buffer, use this crate. To convert integer values, use these methods.
https://fuchsia.googlesource.com/fuchsia/+/3a2c9b130f545121abbc96f99745c50c560282db/docs/development/source_code/endian.md
Hi hppa people! I'm hoping you can help me fix a FTBFS that we're getting with Guile on hppa. The build log is here:

;ver=1.8.5%2B1-2;arch=hppa;stamp=1217809852

The specific problem is a segmentation fault, at a point in a build that probably won't mean anything to non-Guile folks - but the key point is that we were recently seeing exactly the same segmentation fault (i.e. at the same place) on several other architectures (mips, mipsel, powerpc), and that was caused by the code in configure.in not detecting the stack direction properly. This patch -

;a=commit;h=9143131b2766d1e29e05d61b5021395b4c93a6bc

- fixed the problem for mips, mipsel and powerpc, but it looks as though we are still getting the stack direction wrong on hppa. (My understanding is that on hppa the stack actually grows upwards, whereas on most platforms it's downwards.)

I've appended the relevant bit of configure.in below. Can anyone help with why this might not be working on hppa?

Thanks,
Neil

#--------------------------------------------------------------------
#
# Which way does the stack grow?
#
# Following code comes from Autoconf 2.61's internal _AC_LIBOBJ_ALLOCA
# macro (/usr/share/autoconf/autoconf/functions.m4).  Gnulib has
# very similar code, so in future we could look at using that.
#
# An important detail is that the code involves find_stack_direction
# calling _itself_ - which means that find_stack_direction (or at
# least the second find_stack_direction() call) cannot be inlined.
# If the code could be inlined, that might cause the test to give
# an incorrect answer.
#--------------------------------------------------------------------
SCM_I_GSC_STACK_GROWS_UP=0
AC_CACHE_CHECK([stack direction],
  [SCM_I_GSC_STACK_GROWS_UP],
  [AC_RUN_IFELSE([AC_LANG_SOURCE(
    [AC_INCLUDES_DEFAULT
     int
     find_stack_direction ()
     {
       static char *addr = 0;
       auto char dummy;
       if (addr == 0)
         {
           addr = &dummy;
           return find_stack_direction ();
         }
       else
         return (&dummy > addr) ? 1 : -1;
     }
     int
     main ()
     {
       return find_stack_direction () < 0;
     }])],
    [SCM_I_GSC_STACK_GROWS_UP=1],
    [],
    [AC_MSG_WARN(Guessing that stack grows down -- see scmconfig.h)])])
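For anyone who wants to experiment with the probe outside of autoconf, the same technique can be written as plain C. The key point from the comment in configure.in is that the recursive call must not be inlined; the noinline attribute (a GCC/Clang extension, an assumption here) makes that explicit instead of hoping the compiler cooperates:

```c
#include <assert.h>

/* Returns 1 if the stack grows upward (as on hppa), -1 if downward.
 * Comparing addresses of locals from two different stack frames only works
 * if the two calls really get separate frames, hence the noinline attribute. */
__attribute__((noinline))
static int find_stack_direction(char *outer_addr) {
    char dummy;
    if (outer_addr == 0)
        return find_stack_direction(&dummy);   /* recurse with a fresh frame */
    return (&dummy > outer_addr) ? 1 : -1;
}
```

Strictly speaking, comparing pointers into different objects is undefined behavior in ISO C, which is part of why such configure probes are fragile in the first place.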
https://lists.debian.org/debian-hppa/2008/08/msg00003.html
The stack is a special region of memory which operates on the LIFO principle (Last In, First Out).
We have 16 general-purpose registers for temporary data storage: RAX, RBX, RCX, RDX, RDI, RSI, RBP, RSP and R8-R15. That is too few for serious applications, so we can also store data on the stack. There is yet another use of the stack: when we call a function, the return address is pushed onto the stack. At the end of the function's execution the address is popped into the instruction pointer (RIP), and the application continues to execute from the place right after the function call.
For example:
global _start

section .text
_start:
    mov rax, 1
    call incRax
    cmp rax, 2
    jne exit
    ;;
    ;; Do something
    ;;

incRax:
    inc rax
    ret
The first six integer arguments are passed in registers, and any further arguments are passed on the stack. So if we have a function like this:
int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) { return (a1 + a2 - a3 - a4 + a5 - a6) * a7; }
Let’s look on one simple example:
global _start

section .text
_start:
    mov rax, 1
    mov rdx, 2
    push rax
    push rdx
    mov rax, [rsp + 8]
    ;;
    ;; Do something
    ;;
Here we can see that we put 1 into the rax register and 2 into the rdx register. After that we push the values of these registers onto the stack. The stack works as LIFO (Last In, First Out), so after this the stack of our application will have the following structure:
Then we copy the value from the stack at address rsp + 8: we take the address of the top of the stack, add 8 to it, and copy the data at that address into rax. After this, the value of rax will be 1.
Example
Let’s see an example. We will write a simple program which takes two command line arguments, computes their sum and prints the result.
section .data
    SYS_WRITE  equ 1
    STD_IN     equ 1
    SYS_EXIT   equ 60
    EXIT_CODE  equ 0

    NEW_LINE   db 0xa
    WRONG_ARGC db "Must be two command line argument", 0xa

First of all we define the .data section with some values. Here we have four constants for Linux syscalls (sys_write, sys_exit, etc.), and two strings: the first is just a newline symbol and the second is an error message. (Note that the constant named STD_IN is actually used below as the file descriptor for standard output, which is 1.)
Let’s look at the .text section, which consists of the code of the program:

section .text
global _start

_start:
    pop rcx
    cmp rcx, 3
    jne argcError
    add rsp, 8
    pop rsi
    call str_to_int
    mov r10, rax
    pop rsi
    call str_to_int
    mov r11, rax
    add r10, r11
    ...
So we get the command line argument count and put it into rcx. After that we compare rcx with 3, and if they are not equal we jump to the argcError label, which just prints an error message:
argcError:
    ;; sys_write syscall
    mov rax, 1
    ;; file descriptor, standard output
    mov rdi, 1
    ;; message address
    mov rsi, WRONG_ARGC
    ;; length of message
    mov rdx, 34
    ;; call write syscall
    syscall
    ;; exit from program
    jmp exit
Why do we compare with 3 when we have only two arguments? It's simple: the first argument is the program name, and everything after it are the command line arguments which we passed to the program. OK, if we passed two command line arguments, we continue to the add rsp, 8 instruction. There we shift rsp by 8 and thereby skip the first argument - the name of the program. Now rsp points to the first command line argument which we passed. We get it with the pop command, put it into the rsi register and call a function for converting it to an integer; we look at the
str_to_int implementation below. After our function finishes, we have the integer value in the rax register, and we save it in the r10 register. Then we do the same operation, but save the result in r11. In the end we have two integer values in the r10 and r11 registers, and we can get their sum with the add command. Now we must convert the result to a string and print it. Let's see how to do that:
    mov rax, r10
    ;; number counter
    xor r12, r12
    ;; convert to string
    jmp int_to_str
str_to_int:
    xor rax, rax
    mov rcx, 10
next:
    cmp [rsi], byte 0
    je return_str
    mov bl, [rsi]
    sub bl, 48
    mul rcx
    add rax, rbx
    inc rsi
    jmp next
return_str:
    ret
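For comparison, here is the same algorithm written in C (a hypothetical helper, not part of the original program): on each character, multiply the accumulator by 10 and add the ASCII digit minus 48, exactly as the mul rcx / add rax, rbx pair does above:

```c
#include <assert.h>

/* C equivalent of the str_to_int routine: walk the string until the
 * terminating 0 byte, accumulating digits into an integer. */
unsigned long str_to_int(const char *s) {
    unsigned long acc = 0;
    while (*s != 0) {
        acc = acc * 10 + (unsigned long)(*s - 48);  /* 48 is ASCII '0' */
        s++;
    }
    return acc;
}
```

Like the assembly version, this sketch assumes the input consists only of decimal digits and does no error checking.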
After str_to_int we will have the number in rax. Now let's look at int_to_str:

int_to_str:
    mov rdx, 0
    mov rbx, 10
    div rbx
    add rdx, 48
    add rdx, 0x0
    push rdx
    inc r12
    cmp rax, 0x0
    jne int_to_str
    jmp print

On every iteration we divide the number by 10; the remainder in rdx is the next digit. After adding 48 we get the ASCII symbol of this digit (and all strings must be ended with 0x0). After this we save the symbol on the stack, increment r12 (it is 0 at the first iteration; we set it to 0 at _start) and compare rax with 0: if it is 0, it means we have finished converting the integer to a string. Step by step, for example with the number 23: the first iteration pushes '3' (23 % 10 + 48) and leaves 2 in rax; the second pushes '2' and leaves 0, which ends the loop. Popping then yields the digits in the right order.

We have now implemented two useful functions, int_to_str and str_to_int, for converting an integer number to a string and vice versa. The sum of the two integers, converted into a string, is saved on the stack, and we can print the result:
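The digit-extraction loop can also be sketched in C (again a hypothetical helper for illustration). The assembly pushes each remainder on the stack because digits come out least-significant first; here we fill a buffer from the end, which reverses them the same way:

```c
#include <assert.h>
#include <string.h>

/* Convert n to decimal ASCII in buf; returns a pointer to the first digit.
 * buf must be large enough (21 bytes covers any 64-bit value plus the
 * terminating 0 byte). */
char *int_to_str(unsigned long n, char *buf, unsigned buflen) {
    char *p = buf + buflen - 1;
    *p = '\0';
    do {
        *--p = (char)('0' + n % 10);  /* remainder + 48, as in the asm loop */
        n /= 10;
    } while (n != 0);
    return p;
}
```

The do/while form handles n == 0 correctly, producing the single digit "0".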
print:
    ;; calculate number length
    mov rax, 1
    mul r12
    mov r12, 8
    mul r12
    mov rdx, rax
    ;; print sum
    mov rax, SYS_WRITE
    mov rdi, STD_IN
    mov rsi, rsp
    ;; call sys_write syscall
    syscall
    jmp exit
We already know how to print a string with the sys_write syscall, but there is one interesting part here: we must calculate the length of the string ourselves. Finally, we exit from the program:

exit:
    mov rax, SYS_EXIT
    ;; exit code
    mov rdi, EXIT_CODE
    syscall
That’s All.
https://0xax.github.io/asm_3/
Get your existing app ready for HoloLens 2
Overview
This guide is tailored to help developers with an existing Unity application for HoloLens (1st gen) port their application for the HoloLens 2 device. There are four key steps to porting a HoloLens (1st gen) Unity application to HoloLens 2.
The sections below detail information for each stage:
Prerequisites:
It's highly recommended that you use source control to save a snapshot your applications original state before starting the porting process. Additionally, we recommend saving checkpoint states at various times during the process. It can also be helpful to have another Unity instance of the original application to compare side-by-side during the porting process.
Note
Before porting, ensure you have the latest tools installed for Windows Mixed Reality development. For most existing HoloLens developers, this involves updating to the latest version of Visual Studio 2019 and installing the appropriate Windows SDK. The content that follows dives further into different Unity versions and the Mixed Reality Toolkit (MRTK) Version 2.
For more information, please see Install the tools.
Migrate project to the latest version of Unity
If you're using MRTK v2, Unity 2019 LTS is the best long-term support path with no breaking changes in Unity or in MRTK. You should assess any plugin dependencies that currently exist in your project, and determine whether or not these DLLs can be built for ARM64. If a hard dependency plugin cannot be built for ARM64, you may need to continue building your app for ARM.
Update scene/project settings in Unity
After updating to Unity 2019 LTS, it's recommended that you update particular settings in Unity for optimal results on the device. These settings are outlined in detail under recommended settings for Unity.
It should be reiterated that the .NET scripting back-end is being deprecated in Unity 2018 and removed in Unity 2019. Developers should strongly consider switching their project to IL2CPP.
Note
IL2CPP scripting back-end can cause longer build times from Unity to Visual Studio, and developers should set up their developer machine for optimizing IL2CPP build times. It might also be beneficial to set up a cache server, especially for Unity projects with a large amount of assets (excluding script files) or constantly changing scenes and assets. When opening a project, Unity stores qualifying assets into an internal cache format on the developer machine. Items must be re-imported and re-processed when modified. This process can be done once and saved in a cache server and consequently shared with other developers to save time, as opposed to every developer processing the re-import of new changes locally.
After addressing any breaking changes from moving to the updated Unity version, you should build and test your current applications on HoloLens (1st gen). This is a good time to create and save a commit into source control.
Compile dependencies/plugins for ARM processor
HoloLens (1st gen) executes applications on an x86 processor while the HoloLens 2 uses an ARM processor. Therefore, existing HoloLens applications need to be ported over to support ARM. As noted earlier, Unity 2018 LTS supports compiling ARM32 apps while Unity 2019.x supports compiling ARM32 and ARM64 apps. Developing for ARM64 applications is preferred, as there is a material difference in performance. However, this requires all plugin dependencies to also be built for ARM64.
Review all DLL dependencies in your application. It is advisable to remove any dependency that is no longer needed. It is advised to save the application as a commit in your source control solution.
Important
Application's using MRTK v1 can be run on HoloLens 2 after changing the build target to ARM, assuming that all other requirements are met. This includes making sure you have ARM versions of all your plugins. However, your app won't have access to HoloLens 2 specific functions like articulated hand and eye tracking. MRTK v1 and MRTK v2 have different namespaces that allow both versions to be in the same project, which is useful for transitioning from one to the other.
Update to MRTK version 2
MRTK Version 2 is the new toolkit on top of Unity that supports both HoloLens (1st gen) and HoloLens 2. It is also where all the new HoloLens 2 capabilities have been added, such as hand interactions and eye tracking.
Check out the MRTK documentation and sample resources for more information on using MRTK version 2. MRTK v2 changes how input and interactions are handled, so after importing it your project will most likely have many compiler-related errors. These are commonly due to the new namespace structure and new component names. Proceed to resolve these errors by modifying your scripts to use the new namespaces and components.
For information on the specific API differences between HTK/MRTK and MRTK v2, see the MRTK v2 documentation. Some general recommendations:
- Use default MRTK UX (buttons, slates, etc.), when possible.
- Refrain from modifying MRTK files directly; create wrappers around MRTK components.
- This action eases future MRTK ingestions and updates.
- Review and explore sample scenes provided in the MRTK, especially HandInteractionExamples.scene.
- Rebuild canvas-based UI with quads, colliders, and TextMeshPro text.
- Enable Depth Buffer Sharing or set the focus point; prefer a 16-bit depth buffer for better performance. When rendering color, ensure depth is also rendered: Unity generally does not write depth for transparent and text gameobjects.
- Set Single Pass Instanced Rendering Path.
- Utilize the HoloLens 2 configuration profile for MRTK
Testing your application
In MRTK Version 2, you can simulate hand interactions directly in Unity as well as develop with the new APIs for hand interactions and eye tracking. However, a HoloLens 2 device is required to create a satisfying user experience, so you are encouraged to start studying the documentation and tools for greater understanding. MRTK v2 also supports development on HoloLens (1st gen), and traditional input models, such as select via air-tap, can be tested on HoloLens (1st gen).
Updating your interaction model for HoloLens 2
Once your application is ported and prepped for HoloLens 2, you're ready to consider updating your interaction model and hologram placements to leverage the capabilities that have been designed and optimized for HoloLens 2.
Interaction model: Consider updating your interaction model. For most scenarios, we recommend switching from gaze and commit to hands. With your holograms typically being out of arms-reach, switching to hands results in far interaction pointing rays and grab gestures.
Hologram placement: After switching to a hands interaction model, consider moving some holograms closer to directly interact with them, by using near-interaction grab gestures with your hands. The types of holograms recommended to move closer to directly grab or interact are small target menus, controls, buttons, and smaller holograms that fit within the HoloLens 2 field of view when grabbing and inspecting the hologram.
Every application and scenario is different, and we’ll continue to refine and post design guidance based on feedback and continued learnings.
Additional caveats and learnings about moving applications from x86 to ARM
Straightforward Unity applications are simple to port: you can build an ARM application bundle or deploy directly to the device for the bundle to run. However, some Unity native plugins can present development challenges; because of this, you must upgrade all Unity native plugins to Visual Studio 2019 and then rebuild them for ARM.
One application used the Unity AudioKinetic Wwise plugin and that version of Unity did not have a UWP ARM plugin, which caused a considerable effort to rework sound capabilities into the application in question to run on ARM. Ensure that all required plugins for your development plans are installed and available in Unity.
In some cases, a UWP/ARM plugin might not exist for application-required plugins, which blocks the ability to port the application. Another caveat concerns minimum-precision types in shaders: these guarantee only that at least the specified number of bits will be used. On Intel/Nvidia GPUs, these are largely treated as 32 bits. On ARM, the number of bits specified is actually adhered to. That means in practice, these numbers might have less precision or range on HoloLens 2 than they did on HoloLens (1st gen).
The _asm instructions don’t appear to work on ARM, meaning any code using _asm instructions must be rewritten.
The x86 SIMD instruction set is not supported on ARM, so any code depending on it must be reworked. Shader compilation the first time shaders are used can also be noticeable, depending on how many shaders need to be compiled. This has various implications for how shaders should be handled, packaged, and updated differently on HoloLens 2 vs HoloLens (1st gen).
https://docs.microsoft.com/en-gb/windows/mixed-reality/mrtk-porting-guide
Error Code: A12E1holly a brenton Aug 6, 2012 11:42 AM
What are the solutions to the problem? I cannot get Adobe Application Manager to load
1. Re: Error Code: A12E1meridian2002 Aug 7, 2012 11:57 AM (in response to holly a brenton)
Neither can I. Even AAM 6.2
2. Re: Error Code: A12E1m!ndl0rd
Sep 4, 2012 5:16 AM (in response to holly a brent.
3. Re: Error Code: A12E1RGalvan93 Jun 24, 2013 10:26 AM (in response to holly a brenton)
I just restarted my computer and resumed the download. Everything worked fine.
4. Re: Error Code: A12E1Sharkey134 Jun 25, 2013 3:06 AM (in response to m!ndl0rd)
Does this answer still apply? I've encountered it in the newer Creative Cloud app when trying to update it when it prompted me to install a new version. Half way through installing either pre- or post-reboot, it displays the error code and kills the process.
5. Re: Error Code: A12E1bhatnaga
Jun 25, 2013 3:11 AM (in response to Sharkey134)
Hi Sharkley,
Was the error code A12E1 or A12E5?
Regards,
Anirudh
6. Re: Error Code: A12E1Sharkey134 Jun 25, 2013 3:25 AM (in response to bhatnaga)!
7. Re: Error Code: A12E1Jarm Jun 25, 2013 3:57 AM (in response to bhatnaga)
Hi
I'm having this problem this morning too. I've rebooted and reinstalled twice still getting the A12E1 error.
bhatnaga if you could shed some light on this issue that would be super helpful.
Thanks
8. Re: Error Code: A12E1Sharkey134 Jun 25, 2013 4:00 AM (in response to Jarm).
9. Re: Error Code: A12E1bhatnaga
Jun 25, 2013 4:09 AM (in response to Jarm)1 person found this helpful
10. Re: Error Code: A12E1Jarm Jun 25, 2013 4:24 AM (in response to bhatnaga)
Ok I added Creative Cloud to the Firewall and 'allowed incoming connections'.
This seems to have solved the issue of the error A12E1.
Thanks guys for your support.
bhatnaga
Sharkey134
11. Re: Error Code: A12E1candace6b Jun 26, 2013 8:28 AM (in response to holly a brenton)1 person found this helpful
My issue with this error code started after trying to update a successful installation of Creative Cloud. During the install, my screen went white and my computer froze, and I had to do a hard shut down. After restarting, I got a prompt saying Adobe Creative Cloud was having issues, and that the Desktop Manager was missing or damaged and needed to be reinstalled. The desktop application for Creative Cloud was now missing from my top bar (Mac OS 10.7.5) and also from the folder "Adobe Creative Cloud" in my Applications folder. Only the uninstaller remained. I tried clicking on the link in the prompt and downloading the Creative Cloud Installer, but about halfway through, after asking for password permission for Adobe to make changes to my computer, I got the error code message "There was a problem with the installation (Error code: A12E1). For troubleshooting please go to the adobe support page."
I did follow some of the advice on these forums, including restarting my computer and retrying the installation, only to have the same result.
I also tried renaming the OOBE folder in the Library as suggested in this thread, however that also did not work.
I also checked the Firewall issue that the user above mentioned, but found I have no firewall turned on on my machine, so this was not my issue.
I resorted to calling support. My support specialist walked through these same attempts before finding a solution that resolved the issue. He used this to completely remove the Creative Cloud application (DESKTOP APPLICATION ONLY - HE DID NOT REMOVE OR "CLEAN UP" MY INSTALLED APPS SUCH AS PHOTOSHOP / INDESIGN, etc).
After "cleaning up" the desktop application, he was able to run the Creative Cloud Installer from the Downloads area you get from logging into creative cloud on your browser, and successfully install it on my computer. I would suggest doing this if you had this issue. Hope this is helpful and saves you the half an hour phone call!
12. Re: Error Code: A12E1David__B
Jul 16, 2013 8:43 AM (in response to candace6b)
Hi Candace6B,
Thanks for sharing your solution!
-Dave
13. Re: Error Code: A12E1WeberBob Jul 16, 2013 9:07 AM (in response to David__B)
Yeah it works great... until the next time you try to update... then you have to do it all over again.
It's getting more than a little annoying.
Please just FIX this problem!
Thanks,
~Bob
14. Re: Error Code: A12E1Arf thuis Jul 18, 2013 4:28 AM (in response to holly a brenton)
You might want to check the privileges setting on the folder 'Adobe' in HD/Library/Application Support/
Click the Adobe folder and choose Get Info… In our case the group was set to 'wheel' and this should be 'admin (read&write)'.
To correct: click the lock in the lower right and unlock, select the group named 'wheel', click the minus symbol. Click the plus symbol to add the group 'Administrators'.
Set the new group (called: admin) to read & write.
Don't forget to click the gear with triangle at the bottom and to choose: 'Apply to enclosed', or else the privileges settings will only be changed for the Adobe folder.
Arthur
15. Re: Error Code: A12E1WeberBob Jul 18, 2013 5:24 AM (in response to Arf thuis)
Can you tell me where to find: HD/Library/Application Support/
Thanks,
~Bob
16. Re: Error Code: A12E1David__B
Jul 18, 2013 12:59 PM (in response to WeberBob)
HI Bob,
An easy way to launch Finder, and then from the Menus along that top choose, Go > Computer and then just browse to the directory in the Finder window. Once you find the folder, highlight it and go File > Get Info to check the permissions
-Dave
17. Re: Error Code: A12E1blackmountain-agency Jul 27, 2013 4:20 AM (in response to holly a brenton)
Solution by Adobe support:
18. Re: Error Code: A12E1arman0440 Aug 12, 2013 2:32 PM (in response to m!ndl0rd)
In case of
Error Code: A12E1\
19. Re: Error Code: A12E1Wordyeti Aug 12, 2013 4:41 PM (in response to David__B)
I just got the dreaded A12E1 error code on my Mac Pro - my computer froze, and I had to do a hard reboot..
I am about to start the reboot/install/OOBE surgery process described above on the Mac. If successful, I will then try it on the PC.
20. Re: Error Code: A12E1Wordyeti Aug 12, 2013 5:13 PM (in response to arman0440)
Whew! The uninstall/rename/reinstall boogie seems to have worked on the Mac Pro. Now to try it on the PC.
21. Re: Error Code: A12E1Different-Strokes Aug 13, 2013 7:44 PM (in response to m!ndl0rd)
this worked..... thanks Man!!
22. Re: Error Code: A12E1trixiesirisheyes Aug 13, 2013 10:25 PM (in response to Wordyeti).
23. Re: Error Code: A12E1bhatnaga
Aug 14, 2013 4:20 AM (in response to trixiesirisheyes)
24. Re: Error Code: A12E1trixiesirisheyes Aug 14, 2013 7:36 AM (in response to bhatnaga)
User error. Thank you for checking.
Sent from my iPhone
25. Re: Error Code: A12E1Kelly McCathran Aug 21, 2013 12:58 PM (in response to David__B)
To get to Application Support in OS 10.7 (or newer):
- Make sure you are at the Finder
- Click the Go menu
- Choose Go to Folder (Command Shift G).
- Type in ~/Library
26. Re: Error Code: A12E1trixiesirisheyes Aug 21, 2013 1:13 PM (in response to Kelly McCathran).
27. Re: Error Code: A12E1Kelly McCathran Aug 21, 2013 1:37 PM (in response to Kelly McCathran)
Creative Cloud Update Error code: A121E | Full Fix
- Launch Activity Monitor (on the Mac, click Spotlight, the Magnifying glass in the upper right corner of your screen) then type Activity Monitor (press Return to launch)
- Look for AMUpdatesNotifier and click Quit Process
- Go back to Finder (your Operating System) click the Go menu (at the top, next to the View menu) and choose > Go to Folder
- Type ~/Library then press return (you are now in the Library folder)
- Inside the Library go to Application Support > Adobe and trash (delete) these folders (Command Delete will throw selected folders in the trash):
AAMUpdater
AAMupdateinventory
OOBE
- Next, locate the same folders at the root level of your Hard Drive by double-clicking on Macintosh HD and opening Library > Application Support > Adobe and trash the same folders:
AAMUpdater
AAMupdateinventory
OOBE
- Now, navigate to your Applications folder and open Utilities
- Delete the Adobe Creative Cloud folder and Adobe Application Manager
- Run the Creative Cloud cleaner tool: Tool.dmg
- Remove the Creative Cloud desktop app named:
Adobe Application Manager/Creative Cloud for Win XP, Vista & Max OSX 10.6
(you don't need to un-install any CC applications)
- Login to Creative Cloud through your web browser (Chrome, Firefox or Safari):
- Click Download Center at the top
- Scroll down to Creative Cloud (desktop access to Creative Cloud)
- Click Install
I hope this is a thorough fix. I've seen bits & pieces of this on different forum posts for the Error code: A121E but nothing this specific. Hope this helps...
My last questions to the VERY helpful Adobe support guy (I'm assuming):
Kelly McCathran: Question: (I removed everything 2 weeks ago with this Cleaner Tool) why is this problem happening again?
Sudhansu: Kelly, some times even after removing with the Cleaner tool, some raw files stay in the same location.
He did a thorough and excellent job today.
28. Re: Error Code: A12E1 - Cal Hero, Aug 23, 2013 9:30 AM (in response to holly a brenton)
I have run into this error every time I try to update Creative Cloud. What works for me is to restart my computer and then force close the Adobe update manager in the Activity Monitor (Applications > Utilities > Activity Monitor, select any process starting with "AAM" and click the 'Quit Process' button in the menu bar).
I then open creative cloud, which requires me to update, and then I run the updater which completes without a problem.
I hope this works for you... Creative Cloud has been such a technical headache, whereas the CS versions were trouble-free for me since 2006!
29. Re: Error Code: A12E1 - Ger Ger, Aug 27, 2013 12:40 PM (in response to holly a brenton)
This might help:
30. Re: Error Code: A12E1 - MKenney Design, Sep 5, 2013 10:08 AM (in response to Kelly McCathran)
Kelly,
Thank you! I got the A121E error today while downloading the CC update. Your excellent -- and thorough -- fix worked for me!
Mike
31. Re: Error Code: A12E1 - jetfilm, Sep 7, 2013 11:01 AM (in response to Kelly McCathran)
The renaming did the trick, thank you very much. I never would have known that in the first place. OOBE to OOBE_old (I'm on Mac.)
===========================================================
Off topic:
Why do I have to deal with this in the first place? Right -- I signed up for the cloud, instead of enjoying CS6 as before.
So far I love Adobe's apps since 20+ years (with some anger mixed into this otherwise positive compliment, when crashes happen ... of course ;o)
But so far, I wish I had never signed up for the cloud. I do not see bugs getting fixed any faster (if at all). I'll be happy when the year is over; I will not extend. The cloud has slowed down my work; not much, but it has.
32. Re: Error Code: A12E1 - Wordyeti, Sep 9, 2013 1:26 PM (in response to jetfilm)
Two months later, trying to update the CC tool ... and I get the same damn 12E1 error code. This time, I went through the whole rigmarole all over again. Did surgery on the Library and the root directory. Rebooted. Tried to reinstall CC ... and got the white screen of death.
They really, really need to work the kinks out of this distribution model, 'cause what they're doing now is not a good customer experience.
33. Re: Error Code: A12E1 - xodiacdkk, Sep 10, 2013 4:56 AM (in response to Wordyeti)
Had to re-install my MBP with OS X 10.9, where Creative Cloud has existed since the beginning of summer. Now I also get this 12E1 error. The only difference between my old install and the new one can be the format of the file system. I have formatted with
Mac OS Extended (Case-sensitive, Journaled), which I can see the Adobe Application Manager 7.0 does not support.
Could it be the same with the Creative Cloud client??
-mic
34. Re: Error Code: A12E1 - PixelPump, Sep 13, 2013 1:32 PM (in response to Kelly McCathran)
Tried your full list of instructions after getting the A121E issue for about the 6th time. Thank you by the way :) Creative Cloud desktop seemed to install OK, but now it doesn't seem to know what apps I have installed! It invites me to install the apps I already have installed :( Time to get your act together, Adobe. The CC apps themselves are generally reliable. But this app for managing your new Creative Cloud thing is something you should be ashamed of. It is flaky, unreliable, complicated for us (your clients) to try to keep working, and this problem has now been going on for many months. SORT IT OUT!
35. Re: Error Code: A12E1 - candace6b, Sep 25, 2013 5:28 AM (in response to Kelly McCathran)
Despite the fix I submitted back in June, I have continuously run into this error every time I try to update with Creative Cloud. This is pretty frustrating, as part of the reason my company went with this subscription was the ability to continuously keep all our programs up to date. This error really makes that a big pain, as you have to go through these steps to get it to work every time.
Kelly McCathran's solution worked for me this time. I am hoping it will last through the next update or two, at least.
I really hope Adobe's technical staff is spending a good amount of time trying to solve this issue because right now it's a big black mark on the new suite for me. I'd like to get the subscription at home for individual use, but this is the main reason I stop short, because I don't want to go through this problem at home AND at work...
Please put some attention on this, Adobe. Thank you.
36. Re: Error Code: A12E1 - Kelly McCathran, Sep 26, 2013 1:47 PM (in response to candace6b)
I had this pop up again on a lab computer last week; un-installing and re-installing the Creative Cloud desktop app fixed it quickly. I pray it solves it permanently. Happy to hear the full fix worked for you, candace6b.
37. Re: Error Code: A12E1 - MadMuva, Oct 7, 2013 12:06 AM (in response to holly a brenton)
I think this is now the 7th time this has happened - every time there's an update. Add to it that the accounts are hacked - come on Adobe, your software is VERY expensive; sort this issue out, with an update!!!
38. Re: Error Code: A12E1 - sja1711, Oct 14, 2013 8:25 PM (in response to trixiesirisheyes)
So frustrated with Adobe products constantly requiring an update and their hunger for my limited regional internet access!!! I just can't afford to update.
39. Re: Error Code: A12E1 - rlevers, Oct 31, 2013 9:14 AM (in response to m!ndl0rd)
On a Mac and have tried everything you mention, but I keep getting the A12E1 error code. Other thoughts?
https://forums.adobe.com/thread/1045283?tstart=0
The intention of this blog is to talk about the similarities and differences between Python and Java. This will be a series of blog posts, this being the first. Through this series I plan to help myself and other Python developers learn Java easily by relating it to similar concepts in Python. The focus of this post will be on learning basic concepts and writing some simple code in Java.
Why learn Java?
- Java is a very popular and widely used programming language. It is used to build large systems because the JVM's optimized, JIT-compiled execution makes programs run fast. When an application needs to scale up, Java is one of the most popular choices in the industry.
- It supports Concurrency.
- Java has good cross-platform support.
- It has a very strong IDE support making the developer’s life easier.
- It imposes certain best practices. There is usually a single way of doing things, so a bad programmer can only do so much harm.
In languages like Python, by contrast, there are several ways of writing the same piece of code, and the onus is completely on the developer to use the most optimized one.
Classes and Objects:
The concepts of Classes, Objects and Object-Oriented programming are similar to Python. But here's the catch:
Each .java file can have only a single public class, and the name of that public class must be the same as the name of the .java file. The application is launched from the main method of such a public class, so the exact entry point of the application is always known.
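As a minimal illustration of this rule (a sketch of my own, not from the original post): a file named HelloWorld.java may contain exactly one public class, and that class must be named HelloWorld.

```java
// HelloWorld.java - the single public class matches the file name,
// and its main method is the entry point the JVM launches.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from Java!");
    }
}
```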
Variables:
Again, the concept is very similar to Python including the naming conventions.
Java cares about type. An elephant cannot be put in a basket meant for vegetables. This is very much unlike Python, which decides whether we need a basket or a cage based on whether it is an elephant or a vegetable that we are dealing with. 🙂 Moreover,
- variables declared as final cannot be reassigned, once assigned to some value.
- A variable of a particular type can be assigned to another of the same type. Example, a variable of the type Tiger (Tiger t1 = new Tiger()) can be assigned to any another variable of the type Tiger. (Tiger t2 = new Tiger(); t2 = t1;)
- Java is PASS-BY-VALUE.
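To make the pass-by-value point concrete, here is a small sketch (the class and method names are my own, purely illustrative): reassigning a parameter inside a method does not affect the caller's variable, but mutating the object that the copied reference points to does.

```java
// Demonstrates Java's pass-by-value semantics: methods receive a copy
// of the reference, not the caller's variable itself.
public class PassByValueDemo {
    static class Box { int value; }

    static void reassign(Box b) {
        b = new Box();   // only the local copy of the reference changes
        b.value = 99;    // invisible to the caller
    }

    static void mutate(Box b) {
        b.value = 42;    // visible to the caller: same underlying object
    }

    public static void main(String[] args) {
        Box box = new Box();
        box.value = 1;
        reassign(box);
        System.out.println(box.value); // prints 1 - reassignment had no effect
        mutate(box);
        System.out.println(box.value); // prints 42 - mutation is visible
    }
}
```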
Global variables in Python are declared with the global keyword, as in global x. Java has no concept of global variables as such. However, a variable marked as public, static and final acts as a globally available constant.
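For example, such a globally available constant might look like this (a sketch; the names are illustrative):

```java
// Constants.java - public static final fields act like Python
// module-level constants: visible everywhere, but not reassignable.
public class Constants {
    public static final int MAX_PLAYERS = 4;
    public static final String GAME_NAME = "Battleship";

    public static void main(String[] args) {
        // Accessed from anywhere via the class name
        System.out.println(Constants.GAME_NAME + " supports up to " + Constants.MAX_PLAYERS + " players.");
    }
}
```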
Conditional Statement:
if(var1 == var2) { <do this> }
Iteration:
WHILE Loop:
Use a while loop when we do not know the exact number of times to loop in advance; in other words, when we have to loop until a condition is satisfied.
while(boolean condn) { <repeat this> }
FOR Loop:
for(int i=0;i<anArray.length;i++){ <repeat array-length number of times> }
Enhanced FOR loop: (For each loop)
String[] AnArray = {"John","Bob","Andy","George","Rachel"};
System.out.println("Names: ");
for(String name: AnArray) {
    System.out.println(name);
}
Java Library or Java API
Let's now write some simple code in Java to accomplish certain repetitive tasks using the built-in libraries.
ArrayList:
ArrayList provides frequently used operations on lists that would otherwise take multiple lines of code and iteration to implement. Similar to the built-in functions for a list in Python, elements can be added to the list or removed, and we can check if an element is present in the list, get the size of the list, get the index of an element, etc., as follows:
import java.util.ArrayList;
ArrayList<String> ListPlaces = new ArrayList<String>();
String p1 = new String("Switzerland");
String p2 = new String("Singapore");
ListPlaces.add(p1);
ListPlaces.add(0,p2); // 0 - index at which p2 should be added
System.out.println(ListPlaces);
System.out.println(ListPlaces.size());
System.out.println(ListPlaces.get(0));
System.out.println(ListPlaces.indexOf(p1));
System.out.println(ListPlaces.remove(1));
System.out.println(ListPlaces.contains(p2));
System.out.println(ListPlaces.isEmpty());
Generating a random number:
int Rand = (int) (Math.random() * 10);
*(int) -> Type casting the return value of the random() function to an integer.
Obtaining user input from command line:
import java.util.Scanner;
System.out.println("Please enter your name: ");
Scanner sc = new Scanner(System.in);
String username = sc.nextLine();
Let’s play Battleship
Here's an example program: the Battleship game, written in Java. We will play it on the command line. The game randomly places 3 battleships on a 7x7 board and lets the user guess their locations. On sinking each ship, it says "kill". When all the ships have sunk, the game ends with a score.
There may be better ways of writing this code but at this stage, we are just trying to get a step closer to learning Java. Here’s an implementation of the battleship game.
This code is also available on Github:
import java.util.*; import java.io.*; public class Battleship { private GameHelper helper = new GameHelper(); private ArrayList<PlayBattleship> battleList = new ArrayList<PlayBattleship>(); private int noOfGuesses = 0; private void setUpGame() { PlayBattleship ship1 = new PlayBattleship(); ship1.setName("Maximus"); PlayBattleship ship2 = new PlayBattleship(); ship2.setName("Aloy"); PlayBattleship ship3 = new PlayBattleship(); ship3.setName("Agnes"); battleList.add(ship1); battleList.add(ship2); battleList.add(ship3); System.out.println(battleList); System.out.println("Your goal is to sink the three ships: Maximus, Aloy and Agnes with the least number of guesses."); for (PlayBattleship shipToset : battleList) { ArrayList<String> newLocation = helper.placeShip(3); shipToset.setLocation(newLocation); } } private void startPlaying() { while(!battleList.isEmpty()) { String userGuess = helper.getUserInput("Enter a guess"); checkUserGuess(userGuess); } finishGame(); } private void checkUserGuess(String userGuess) { noOfGuesses++; String result = "miss"; for(int i=0;i<battleList.size();i++) { result = battleList.get(i).checkTheGuess(userGuess); if(result.equals("hit")) { break; } if(result.equals("kill")) { battleList.remove(i); break; } } System.out.println(result); } void finishGame() { System.out.println("Wow!!! You sunk all the ships! Congrats!"); if(noOfGuesses <= 18) { System.out.println("Score: Congrats, you sunk all ships in " + noOfGuesses + " guesses!"); } else { System.out.println("You have exhausted your options! Sorry! Please try again!"); } } public static void main(String[] args) { Battleship game = new Battleship(); game.setUpGame(); game.startPlaying(); } }
class PlayBattleship { private ArrayList<String> location; private String name; public void setLocation(ArrayList<String> battleList) { location = battleList; } public void setName(String n){ name = n; } public String checkTheGuess(String choice) { String result = "miss"; int index = location.indexOf(choice); if(index>=0) { location.remove(index); if (location.isEmpty()) { result = "kill"; System.out.println("You sunk " + name + ":|"); } else { result = "hit"; } } return result; } }
class GameHelper { private static final String alphabet = "abcdefg"; private int gridlength = 7; private int gridSize = 49; private int[] grid = new int[gridSize]; private int comcount = 0; public String getUserInput(String prompt) { String inputLine = null; System.out.println(prompt); try { BufferedReader input = new BufferedReader(new InputStreamReader(System.in)); inputLine = input.readLine(); if(inputLine.length() == 0) return null; } catch(IOException e){ System.out.println("IOException: " + e); } return inputLine.toLowerCase(); } public ArrayList<String> placeShip(int size) { ArrayList<String> cells = new ArrayList<String>(); String temp = null; int [] curr = new int[size]; int attempts = 0; boolean success = false; int location = 0; comcount++; int incr = 1; if((comcount % 2) == 0) { incr = gridlength; } while(!success & attempts++ < 200) { location = (int) (Math.random() * gridSize); int x = 0; success = true; while(success && x<size) { if(grid[location] == 0) { curr[x++] = location; location += incr; if (location >= gridSize) { success = false; } if (x>0 && (location % gridlength == 0)) { success = false; } } else { success = false; } } } int x = 0; int row = 0; int column = 0; while (x < size) { grid[curr[x]] = 1; row = (int) (curr[x]/gridlength); column = curr[x] % gridlength; temp = String.valueOf(alphabet.charAt(column)); cells.add(temp.concat(Integer.toString(row))); x++; // System.out.print(" co-ord "+x+" = " + cells.get(x-1).toUpperCase()); } return cells; } }
Conclusion:
In this blog post we learnt how to use variables and how to write classes, conditional statements and loops in Java. We also glanced at some basic APIs, or built-in libraries, in Java that help perform certain repetitive operations easily. We also put these concepts into action by coding a game.
https://kriyavikalpa.com/2019/01/18/java-for-python-developers-1-basics/
27 May 2008 10:40 [Source: ICIS news]
SINGAPORE (ICIS news)--More Asia naphtha cracker operators may have to reduce operating rates due to record high costs and poor margins from downstream products, industry sources said on Tuesday.
“With such narrow differentials, there is just no incentive to buy naphtha to produce ethylene,” an end-user said.
Some end-users in northeast Asia
Notional future values for ethylene are pegged at $1,285-1,315/tonne CFR (cost and freight) NE Asia (northeast Asia).
In the Asian naphtha markets, the second half of July contract was notionally pegged at $1,125.00-1,128.00/tonne CFR Japan and first half August at $1,122.50-1,125.50/tonne CFR Japan.
“If margins continued to be this bad and our crackers were still running at reduced rates, then we would not need to buy any naphtha for the next month or so,” he added.
“We anticipate some inefficient naphtha crackers would be shut down in the near term,” another end-user said.
High naphtha prices have kept end-users at bay, resulting in SK Energy in
Asian naphtha broke another historical record high late on Monday, trading at $1,125.00/tonne CFR Japan.
http://www.icis.com/Articles/2008/05/27/9126839/asia-cracker-ops-look-to-cut-output-on-high-costs.html
what is $_TESTATUS_InvalidObjectType
Started by sqa, 3 posts in this topic
Similar Content
- By van_renier
Could someone help me understand why I'm losing the object for certain _IE user functions (those included with Au3 v3.3.10.2)?
#include <IE.au3> ; Required for automatically entering login credentials to app manager
GLOBAL $oIE
$oIE = _IECreate("about:blank", 1)
; _IEQuit ($oIE) ; _IEQuit line HERE, it works, closing the browser...
ConsoleWrite ("===============================" & @CRLF )
If WinExists ( "Blank Page - Windows Internet Explorer") <> 1 then
    While WinExists ("Blank Page - Windows Internet Explorer") <> 1
        sleep ( 500 )
    WEnd
EndIf
sleep ( 1200 )
ConsoleWrite ("===============================" & @CRLF )
; _IEQuit ($oIE) ; _IEQuit line HERE, it works, closing the browser...
ConsoleWrite ('This is where the object, $oIE, gets broken/lost' & @CRLF )
_IENavigate( $oIE, "" )
sleep (3000)
ConsoleWrite ("===============================" & @CRLF )
sleep ( 5000 )
_IEQuit ($oIE) ; HERE FAILS, console error: "--> IE.au3 T3.0-1 Error from function _IEQuit, $_IEStatus_InvalidObjectType"
exit

In the above script, there are 3 lines with _IEQuit. The first two are commented out, but when enabled at those points they work, closing the browser fine; the third line fails to close the browser window.
(I'm not wanting to close the browser window, but I was trying to figure out why subsequent calls lose the object variable reference of $oIE. Using _IEQuit seemed to be the easiest way to ensure we were attached to the same browser window.)
I've also noticed, using the above script, that after the script exits (with the 3rd _IEQuit line intact), since the browser window is still open, if I try to enter any URL into the address bar, pressing Enter then causes a new browser window to open up.
Any suggestions on why the object reference is getting broken?
Thanks,
Van
- By 6105
Hi,
I can't resolve a problem with killing the attached tabs that need to be killed: only 3 out of 5 are ever killed.
Here are 6 links; 5 need to be killed:
#include <IE.au3>
#include <String.au3>
$oIE = _IECreate('facebook.com')
sleep(5000)
Dim $aIE[1]
Dim $aRE[1]
$aIE[0] = 0
$aRE[0] = 0
$i = 1
While 1
    $oIE = _IEAttach ("", "instance", $i)
    If @error = $_IEStatus_NoMatch Then ExitLoop
    ReDim $aIE[$i + 1]
    ReDim $aRE[$i + 1]
    $aIE[$i] = $oIE
    $get = _IEPropertyGet($oIE, "locationurl")
    $aRE[$i] = $get
    $aIE[0] = $i
    $check = _StringBetween($get, "m.", ".com")
    If not @error then
        $oIE = _IEAttach ("", "instance", $i)
        ConsoleWrite('Get = '&$get&@CRLF)
        _IEQuit($oIE)
        sleep(200)
    EndIf
    $i += 1
WEnd
MsgBox(0, "Browsers Found", "Number of browser instances in the array: " & $aIE[0])
Thank you in advance.
Tedy.
https://www.autoitscript.com/forum/topic/184590-what-is-_testatus_invalidobjecttype/
Command line arguments
Command line arguments Please explain me what is the use of "command line arguments" and why we need to use
Line Breaks in PHP
Line Breaks in PHP Hi,
I am beginner in PHP language. How to do the line-break in PHP? Does somebody help me in this regard.
Thanks
divide the line in to two
divide the line in to two In| this |example |we| are| creating |an |string| object| .We| initialize| this| string |object| as| "Rajesh Kumar"|.| We| are| taking| sub| string |
In above mentioned line based on the tag i want
sorting Line in c
sorting Line in c Hi I need help with sorting line in programming c
asking user to enter 1-10 characters and then sort them in lower case, alphabetical and ascending order. Any help will be appreciated! please help!
void
How to read big file line by line in java?
Learn how to write a program in java for reading big text file line by line
In this tutorial I will explain you how you can read big file line by line... is very useful in reading big file line by line in
Java.
In this tutorial we
Line Drawing - Swing AWT
Line Drawing How to Draw Line using Java Swings in Graph chart,by giving x & Y axis values Hi friend,
i am sending code of draw line...) {
System.out.println("Line draw example using java Swing");
JFrame frame = new
Line Number Reader Example
Line Number Reader Example
...; readLine()
method is used to read the data line by line. LineNumberReader class
is provide getLineNumber() method to get the number of line. We
COMMAND LINE ARGUMENTS
COMMAND LINE ARGUMENTS JAVA PROGRAM TO ACCEPT 5 COMMAND LINE ARGUMENTS AND FIND THE SUM OF THAT FIVE.ALSO FIND OUT LARGEST AMONG THAT 5
Hi Friend,
Try the following code:
import java.util.*;
class
Compiling package in command line
Compiling package in command line Hi friends,
i am totally new to java programming, i am basic learner in java.
My query is, How to compile java package using command line.
For Eg: When i compile following command,
c:>set
How to read a large text file line by line in java?
How to read a large text file line by line in java? I have been... memory assigned is also a limit.
So, we have decided to read the text file line by line and the extract the data line by line. Reading the file line by line
line number in a file of entered string
line number in a file of entered string when i entered string from console , it should show line number in a file of that string
http://www.roseindia.net/discussion/20891-End-of-line.html
Tabpy, SQL, and the Knapsack Problem - Combination Optimization inside Tableau - Gabe DeWitt, Sep 19, 2018 4:32 PM
Here are some neat things I've pieced together that others might find helpful.
You'll need to have set up Tabpy for these examples.
Querying MS SQL Server with Tabpy
I don't know much SQL, but have managed to figure a few things out.
A neat ability within Tableau that I just recently became aware of is Performance Recording.
Help > Settings and Performance > Start Performance Recording
- Turn it on, do a few things that will pull the data from your SQL database, and turn it back off. Tableau then outputs a Dashboard of the results.
- Put the Slider for "Show Events..." to '0'. This will then display all the queries Tableau made.
- Select the Query of choice from the Events panel, and it's associated query will display in the Query panel.
- Copy that Query.
What's especially great about this is that the query includes all of the complex if statements and logic I used to create the calculated values in Tableau, and now I have them in pure SQL form.
From here I put that code into a Python editor of choice. I mostly use Jupyter Notebook and Spyder.
Here's the script I use to pull MS SQL Server data via python
import pandas as pd
import pyodbc

server = 'SERVER NAME'
db = 'DATABASE NAME'

# Create the connection
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + db + ';Trusted_Connection=yes')

# query db
sql = '''
SELECT ALL THAT STUFF FROM PERFORMANCE RECORDER
'''

df = pd.read_sql(sql, conn)
Load the data into pandas, and voilà, I have a dataframe of the same fields I calculated in Tableau to do some more advanced analysis stuff with in Python.
Aside -
The initial reason I went down this path with Tabpy was for an Asset Management Dashboard I'm building. I wanted to be able to answer the question:
"if I have a $5M dollar budget, what equipment should I spend it on?"
I created a Priority Index based on the bucketed age of equipment past its replacement date, paired with an internal measure of priority (a value recorded in the database when the equipment was originally added). I used a parameterized modifier on both age priority and internal priority, allowing the combined Priority Index to vary based on a user's configurations of what he/she denotes as a higher weighting factor toward replacement priority.
This worked well enough in Tableau, using a running sum to determine which and how many pieces of equipment to replace... but it would fall short of utilizing the whole budget, depending on the priority parameter modifier selections. Sometimes it would leave a lot unspent if the next item on the index was a large-value item (e.g. $1M), sending it over the running-sum cap.
So, I needed another way that could handle this with a more programmatic solution.
I found some easy enough to understand Python that handles the Knapsack problem.
Knapsack (Combination Optimization) with Tabpy
For those not familiar with this problem, here are some wiki bits on the subject:
As with many useful but computationally complex algorithms, there has been substantial research on creating and analyzing algorithms that approximate a solution. The knapsack problem, though NP-Hard, is one of a collection of algorithms that can still be approximated to any specified degree. This means that the problem has a polynomial time approximation scheme. To be exact, the knapsack problem has a fully polynomial time approximation scheme (FPTAS)
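To see the idea behind the script below independent of Tableau, here is a compact sketch of the classic 0/1 knapsack dynamic program (written in Java purely for illustration; the toy costs and priority values are made up, standing in for asset costs and priority weights):

```java
// 0/1 knapsack via dynamic programming: table[w] holds the best total
// value achievable within capacity w using the items seen so far.
public class KnapsackDemo {
    static int knapsack(int[] weights, int[] values, int capacity) {
        int[] table = new int[capacity + 1];
        for (int i = 0; i < weights.length; i++) {
            // iterate capacity downward so each item is used at most once
            for (int w = capacity; w >= weights[i]; w--) {
                table[w] = Math.max(table[w], table[w - weights[i]] + values[i]);
            }
        }
        return table[capacity];
    }

    public static void main(String[] args) {
        // costs play the role of weights; priority scores play the role of values
        int[] cost = {3, 4, 5};
        int[] priority = {4, 5, 6};
        System.out.println(knapsack(cost, priority, 7)); // items 0 and 1 fit: 4 + 5 = 9
    }
}
```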
After getting my code to work in python I had to make it work in Tableau...which was a bit tricky.
Rather than get too long winded with explanations, here's the python code as it would appear in a calculated field in Tableau for a Boolean Result...
I also made another version that returned the asset ID string, but found a simple Boolean response worked better (I included a note toward this in the below script).
SCRIPT_BOOL(" import pandas as pd import pyodbc server = 'ServerName' db = 'databaseName' # Create the connection conn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + db + ';Trusted_Connection=yes') # query db sql = ''' SELECT pasted stuff from Performance Recording Query ''' df = pd.read_sql(sql, conn) #reorder dataframe columns df= df[['AssetID','Cost','ReplacementLag', 'ReplacementPriority']] print df # mutiply by Tableau Parameters and then combine priority weighted values into one df['ReplacementWGT'] = (df['ReplacementLag']*max(_arg3)) + (df['ReplacementPriority']*max(_arg4)) print df #reorder columns df= df[['AssetID','Cost', 'ReplacementWGT']] # fill NaN values with 0 df['Cost']=df['Cost'].fillna(0) # Turn AssetID into String df['AssetID'] = df['AssetID'].apply(str) # Turn to Integers df['ReplacementWGT'] = df['ReplacementWGT'].astype(int) df['Cost'] = df['Cost'].astype(int) # Sort dateframe by ReplacementWGT df=df.sort_values(by='ReplacementWGT', ascending=False) print df # Limit for Testing ''' limit df for testing ''' df=df[:25] print df # turn df into tuples subset = df itemsList = [tuple(x) for x in subset.values] # Turns tuple list into tuple of tuples items = tuple(itemsList) print items # adding max() to budget parameter allowed this to work BUDGET= max(_arg2) # This is the start of the Knapsack script try: xrange except: xrange = range def totalvalue(comb): ' Totalise a particular combination of items' totwt = totval = 0 for item, wt, val in comb: totwt += wt totval += val return (totval, -totwt) if totwt <= BUDGET else (0, 0) # Second part of the Knapsack scripts def knapsack01_dp(items, limit): table = [[0 for w in range(limit + 1)] for j in xrange(len(items) + 1)] for j in xrange(1, len(items) + 1): item, wt, val = items[j-1] for w in xrange(1, limit + 1): if wt > w: table[j][w] = table[j-1][w] else: table[j][w] = max(table[j-1][w], table[j-1][w-wt] + val) result = [] w = limit for j in range(len(items), 0, -1): 
was_added = table[j][w] != table[j-1][w] if was_added: item, wt, val = items[j-1] result.append(items[j-1]) w -= wt return result # Output of the Knapsack Script bagged = knapsack01_dp(items, BUDGET) print bagged # get list of solutions from Knapsack lst1 = [i[0] for i in bagged] print lst1 # get the list of Assets from Tableau Data lst2 = [i for i in _arg1] print lst2 # Look for Knapsack solutions inside of Assets list lst3= [] for i in lst2 : if i in lst1: lst3.append(True) else: lst3.append(False) ''' #if you want to return Real value instead of Boolean. You'll also need to modifying the initial entry Script from SCRIPT_BOOL to SCRIPT_REAL for i in lst2 : if i in lst1: lst3.append(i) else: lst3.append(i*0) ''' return lst3 ",ATTR([Asset Id (str)]), [Budget], [Replacement Lag Weight],[Replacement Priority Weight] )
It took me a bit to figure out how Tabpy took values in, and how it liked to return them, but I'm feeling a little better now being able to get things to function how I initially imagined it.
One of my main goals was to see if I could get Tableau Parameters to affect the python script.
The trick to it was adding max() to the parameter input arguments.
The only downfall to this solution is the solving time it takes for Tableau to return the results. Combination Optimization isn't a quick solution.
Also, when you change anything involving the Tabpy queries it runs the full python script again.
As I increase the complexity of the problem (higher budget, more assets), it can take several minutes to run and return visualized results in Tableau.
For UX, this is not a very good solution (from an executive user pov).
After getting this all to work in Tableau, I feel there are some more efficient ways to return these same python based results as maybe endpoints, or resting lists in MS SQL Server (e.g. perhaps generated each morning via some python for multiple configurations of the Asset Replacement Dashboard parameters). I also plan to see what other python solutions for combination optimization exist, as to compare the resulting lists of selected assets (see which is doing a better job). I've read some things about the Numberjack python library that sounds promising... I'll try and play with it soon.
I hope this helps a few people connect some dots with SQL, Tableau, and Python.
Cheers,
Gabe
1. Re: Tabpy, SQL, and the Knapsack Problem - Combination Optimization inside Tableau - Ciara Brennan, Sep 27, 2018 7:03 AM (in response to Gabe DeWitt)
Good stuff Gabe, thanks for sharing
Ciara
[Program Manager - Tableau Community]
2. Re: Tabpy, SQL, and the Knapsack Problem - Combination Optimization inside Tableau - Gabe DeWitt, Oct 31, 2018 6:22 AM (in response to Ciara Brennan)
Thanks Ciara, much obliged.
https://community.tableau.com/thread/282443
Background
I. Game Objects (current)
II. Interactions
III. Serialization
This article is the first in a series of articles about the practical application of the entity-component pattern to game programming. This article, in particular, describes what an entity and a component are, how they interact with each other, and what the advantages of the entity-component pattern are over some traditional methods of creating game objects. It also describes the Entities-Parts framework, which is simply a collection of reusable classes and concepts for entity-component interaction.
Since I started implementing the entity-component model several years ago, it has evolved based on experience using it in my games and on studying various articles and other implementations. It has not been tested in large-scale games such as RPGs, so by no means is it perfect. The implementation and the articles will keep evolving as my knowledge increases and I develop more games.
As part of the framework, I will also provide an implementation of the entity and component and an example RPG battle system showing interactions between entities and components. The example will be built upon in the next articles. The implementation is provided in Java and in C++ with Boost.
It is generic enough that you can reuse your components in multiple games and decouple your game objects, i.e. entities, from high-level game logic and systems. I prefer to use the word 'part' instead of 'component' because it is shorter.
The Problem
Some approaches to creating game objects require you to cram all functionality into one GameObject class, or create several game object classes that have duplicate code. With the entity-component pattern, you can reuse code and make your game objects dynamic by thinking of them as a bunch of interconnected parts.
Deep Inheritance Hierarchy Approach: Let's say you have a Monster class. The class contains a few variables, such as those for health, damage, and position. If you want a new type of Monster that flies, you derive a FlyingMonster class from the Monster class. If you want a spellcasting Monster, you derive a SpellMonster class from the Monster class. The problem arises when you want a spellcasting flying monster. You now need to decide whether you want to derive SpellFlyingMonster from FlyingMonster or SpellMonster. Furthermore, you will need to copy and paste code from the FlyingMonster or SpellMonster class to provide the functionality that the SpellFlyingMonster is missing from its parent class.
Monolithic Class Approach: Another approach is creating a single class that represents and contains all the functionality of your game objects, i.e. the GameObject. While this solves the issue of duplicate code that exists in deep inheritance hierarchies, this class can quickly get out of hand as it becomes responsible for flying, spells, health, position, etc. Maintaining it will be nearly impossible, and game objects will be required to hold data for functionality they don't need. For example, a monster that only flies will still need to keep a collection of spells.
Solution - Entity-Component Approach: With the entity-component pattern, you think of a monster as an entity made up of several parts. You do not need separate classes for Monster, FlyingMonster, SpellMonster, and SpellFlyingMonster. For example, to create a FlyingMonster, you create an entity and add a health part, a damage part, and a flying part. If later on, you want it to cast spells, you add the spell part with one line of code. You can create dozens of monster types by mixing and matching parts.
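To make the mix-and-match idea concrete before the Java framework is introduced, here is a minimal Python sketch of the same composition approach. The class and attribute names here are invented for illustration and are not part of the Entities-Parts framework itself:

```python
class Entity:
    """A bag of parts, keyed by part type; at most one part per type."""

    def __init__(self):
        self.parts = {}

    def attach(self, part):
        # Store the part under its type and give it a back-reference.
        self.parts[type(part)] = part
        part.entity = self
        return self  # allow chaining for compact entity construction

    def has(self, part_type):
        return part_type in self.parts

    def get(self, part_type):
        return self.parts[part_type]


class HealthPart:
    def __init__(self, health):
        self.health = health


class FlyingPart:
    def __init__(self, speed):
        self.speed = speed


class SpellsPart:
    def __init__(self, heal_rate):
        self.heal_rate = heal_rate


# A flying monster is just an entity with the relevant parts attached...
flying_monster = Entity().attach(HealthPart(100)).attach(FlyingPart(20))
# ...and upgrading it to a spellcasting flying monster is one more line.
flying_monster.attach(SpellsPart(5))

print(flying_monster.has(SpellsPart))
```

No Monster, FlyingMonster, or SpellFlyingMonster classes are needed; each monster type is just a different set of attached parts.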
Explaining the Concept
Note: The approach I present is fundamentally different from ECS (Entity Component Systems). In ECS, components only store data and do not provide functionality; instead, the functionality is provided by external systems.
It is natural to categorize game objects into monsters, items, and walls. Notice that these objects have similar attributes and functionality. They all need to be drawn, hold a physical space in the world, and collide with the player in some way. The line between different categories of game objects tends to blur as the game objects become more complex. For example, adding movement and damaging spikes to a wall basically turns it into an enemy, e.g. Thwomp in Mario.
In the entity-component pattern, game objects are described as entities that contain several attributes and pieces of functionality. The components/parts provide an abstraction for the attributes and functionality. Parts can be functional, e.g. control the behavior of the entity, or just hold attributes that other systems and parts can reference, e.g. a health stat. The entity is responsible for managing parts as well as their lifetimes, i.e. initializing, updating, and cleaning up parts. There are many benefits to this approach: code reusability, addition and removal of attributes in a game object at runtime, and ease in generating new game object types in complex games such as MMORPGs.
Example: Roleplaying Game (RPG) Battle
It's best to illustrate how the entity-part framework works using the following example game/simulation. The example project is attached to this article. The example is in Java, but the C++ code for the Entity and Part classes is also attached.
The Main class contains the logic to initialize and run our game. It creates a monster and a helpless villager, then uses a basic game loop to update the entities. While the application is running, the state of the game, such as the monster's health, the villager's health, and the monster's height as a result of flying, is printed to the console.
As you can see, it is very easy to create new types of monsters once you write the code for the parts. For example, we can create a non-flying monster by removing the line of code that attaches the flying part. The MonsterControllerPart is the AI for the monster entity, and the target passed into its constructor is the entity that will be attacked. We can make a friendly monster by passing an enemy Entity into the MonsterControllerPart constructor instead of the helpless villager.
public class Main {

    // main entry to the game application
    public static void main(String[] args) throws InterruptedException {
        Entity villager = createVillager();
        Entity monster = createMonster(villager);
        // very basic game loop
        while (true) {
            villager.update(1);
            monster.update(1);
            Thread.sleep(1000);
        }
    }

    // factory method for creating a monster
    public static Entity createMonster(Entity target) {
        Entity monster = new Entity();
        monster.attach(new StatsPart(100, 2));
        // If we don't want our monster to fly, simply comment out this line.
        monster.attach(new FlyingPart(20));
        // If we don't want our monster to cast spells, simply comment out this line.
        monster.attach(new SpellsPart(5));
        monster.attach(new MonsterControllerPart(target));
        monster.initialize();
        return monster;
    }

    // factory method for creating an innocent villager
    public static Entity createVillager() {
        Entity villager = new Entity();
        villager.attach(new StatsPart(50, 0));
        villager.initialize();
        return villager;
    }
}
The MonsterControllerPart code, which serves as the AI and behavior for the monster, includes attacking its target, saying stuff, and attempting to use spells. All of your parts must derive from the Part class. Optionally, parts such as the MonsterControllerPart can override the initialize, cleanup, and update methods to provide additional functionality. These methods are called when the parent entity is respectively initialized, cleaned up, or updated. Notice that a part can access other parts of its parent entity, e.g. entity.get(StatsPart.class).
public class MonsterControllerPart extends Part {

    private Entity target;

    public MonsterControllerPart(Entity target) {
        this.target = target;
    }

    @Override
    public void initialize() {
        System.out.println("I am alive!");
    }

    @Override
    public void cleanup() {
        System.out.println("Nooo I am dead!");
    }

    @Override
    public void update(float delta) {
        StatsPart myStatsPart = entity.get(StatsPart.class);
        // if target has stats part, damage him
        if (target.has(StatsPart.class)) {
            StatsPart targetStatsPart = target.get(StatsPart.class);
            target.get(StatsPart.class).setHealth(targetStatsPart.getHealth() - myStatsPart.getDamage());
            System.out.println("Whomp! Target's health is " + targetStatsPart.getHealth());
        }
        // if i have spells, heal myself using my spells
        if (entity.has(SpellsPart.class)) {
            entity.get(SpellsPart.class).castHeal();
            System.out.println("Healed myself! Now my health is " + myStatsPart.getHealth());
        }
    }
}
The general-purpose StatsPart keeps track of important RPG stats such as health and damage. It is used by both the Monster and the Villager entities.
public class StatsPart extends Part {

    private float health;
    private float damage;

    public StatsPart(float health, float damage) {
        this.health = health;
        this.damage = damage;
    }

    public float getHealth() {
        return health;
    }

    public void setHealth(float health) {
        this.health = health;
    }

    public float getDamage() {
        return damage;
    }
}
The SpellsPart class gives the Monster a healing spell to cast.
public class SpellsPart extends Part {

    private float healRate;

    public SpellsPart(float healAmount) {
        this.healRate = healAmount;
    }

    public void castHeal() {
        StatsPart statsPart = entity.get(StatsPart.class);
        statsPart.setHealth(statsPart.getHealth() + healRate);
    }
}
The FlyingPart code allows the Monster to fly to new heights.
public class FlyingPart extends Part {

    private float speed;
    // in more sophisticated games, the height could be used to tell if an
    // entity can be attacked by a grounded opponent.
    private float height = 0;

    public FlyingPart(float speed) {
        this.speed = speed;
    }

    @Override
    public void update(float delta) {
        height += speed * delta;
        System.out.println("Goin up! Current height is " + height);
    }
}
The Entity-Part Code
The following code blocks are for the Entity class and the Part class. These two classes are the base classes you need for the entity-part framework.
Entity class:
/**
 * Made up of parts that provide functionality and state for the entity.
 * There can only be one of each part type attached.
 * @author David Chen
 */
public class Entity {

    private boolean isInitialized = false;
    private boolean isActive = false;
    private Map<Class<? extends Part>, Part> parts = new HashMap<Class<? extends Part>, Part>();
    private List<Part> partsToAdd = new ArrayList<Part>();
    private List<Class<? extends Part>> partsToRemove = new ArrayList<Class<? extends Part>>();

    /**
     * @return If the entity will be updated.
     */
    public boolean isActive() {
        return isActive;
    }

    /**
     * Sets the entity to be active or inactive.
     * @param isActive True to make the entity active. False to make it inactive.
     */
    public void setActive(boolean isActive) {
        this.isActive = isActive;
    }

    /**
     * @param partClass The class of the part to check.
     * @return If there is a part of type T attached to the entity.
     */
    public <T extends Part> boolean has(Class<T> partClass) {
        return parts.containsKey(partClass);
    }

    /**
     * @param partClass The class of the part to get.
     * @return The part attached to the entity of type T.
     * @throws IllegalArgumentException If there is no part of type T attached to the entity.
     */
    @SuppressWarnings("unchecked")
    public <T extends Part> T get(Class<T> partClass) {
        if (!has(partClass)) {
            throw new IllegalArgumentException("Part of type " + partClass.getName() + " could not be found.");
        }
        return (T)parts.get(partClass);
    }

    /**
     * Adds a part.
     * @param part The part.
     */
    public void attach(Part part) {
        if (has(part.getClass())) {
            throw new IllegalArgumentException("Part of type " + part.getClass().getName() + " already exists.");
        }
        parts.put(part.getClass(), part);
        part.setEntity(this);
        if (isInitialized) {
            part.initialize();
        }
    }

    /**
     * If a part of the same type already exists, removes the existing part. Adds the passed in part.
     * @param part The part.
     */
    public void replace(Part part) {
        if (has(part.getClass())) {
            detach(part.getClass());
        }
        if (isInitialized) {
            partsToAdd.add(part);
        } else {
            attach(part);
        }
    }

    /**
     * Removes a part of type T if it exists.
     * @param partClass The class of the part to remove.
     */
    public <T extends Part> void detach(Class<T> partClass) {
        if (has(partClass) && !partsToRemove.contains(partClass)) {
            partsToRemove.add(partClass);
        }
    }

    /**
     * Makes the entity active. Initializes attached parts.
     */
    public void initialize() {
        isInitialized = true;
        isActive = true;
        for (Part part : parts.values()) {
            part.initialize();
        }
    }

    /**
     * Makes the entity inactive. Cleans up attached parts.
     */
    public void cleanup() {
        isActive = false;
        for (Part part : parts.values()) {
            part.cleanup();
        }
    }

    /**
     * Updates attached parts. Removes detached parts and adds newly attached parts.
     * @param delta Time passed since the last update.
     */
    public void update(float delta) {
        for (Part part : parts.values()) {
            if (part.isActive()) {
                part.update(delta);
            }
        }
        while (!partsToRemove.isEmpty()) {
            remove(partsToRemove.remove(0));
        }
        while (!partsToAdd.isEmpty()) {
            attach(partsToAdd.remove(0));
        }
    }

    private <T extends Part> void remove(Class<T> partClass) {
        if (!has(partClass)) {
            throw new IllegalArgumentException("Part of type " + partClass.getName() + " could not be found.");
        }
        parts.get(partClass).cleanup();
        parts.remove(partClass);
    }
}
Part class:
/**
 * Provides partial functionality and state for an entity.
 * @author David Chen
 */
public abstract class Part {

    private boolean isActive = true;
    protected Entity entity;

    /**
     * @return If the part will be updated.
     */
    public final boolean isActive() {
        return isActive;
    }

    /**
     * @return The entity the part is attached to.
     */
    public final Entity getEntity() {
        return entity;
    }

    /**
     * Sets the entity the part is attached to.
     * @param entity The entity.
     */
    public final void setEntity(Entity entity) {
        this.entity = entity;
    }

    /**
     * Initialization logic.
     */
    public void initialize() {
    }

    /**
     * Cleanup logic.
     */
    public void cleanup() {
    }

    /**
     * Update logic.
     * @param delta Time since last update.
     */
    public void update(float delta) {
    }
}
http://www.gamedev.net/page/resources/_/technical/game-programming/entities-parts-i-game-objects-r3596?st=0#comment_37621
Before you can create a client application to interact with the calculator web service, you must first create a proxy class. Once again, you can do this by hand, but that would be hard work. The folks at Microsoft have provided a tool called wsdl that generates the source code for the proxy based on the information in the WSDL file.
To create the proxy, enter wsdl at the Windows command-line prompt, followed by the path to the WSDL contract. For example, you might enter:
wsdl
The result is the creation of a C# client file named Service1.cs, an excerpt of which appears in Example 15-6. You must add the namespace WSCalc, because you'll need it when you build your client (the tool does not insert it for you).
using System.Xml.Serialization;
using System;
using System.Web.Services.Protocols;
using System.Web.Services;

namespace WSCalc
{
    [System.Web.Services.WebServiceBindingAttribute(
        Name="Service1Soap", Namespace="")]
    public class Service1 :
        System.Web.Services.Protocols.SoapHttpClientProtocol
    {
        public Service1( )
        {
            this.Url = "";
        }

        [System.Web.Services.Protocols.SoapDocumentMethodAttribute(
            "",
            RequestNamespace="",
            ResponseNamespace="",
            Use=System.Web.Services.Description.SoapBindingUse.Literal,
            ParameterStyle=
                System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
        public System.Double Add(System.Double x, System.Double y)
        {
            object[] results = this.Invoke("Add", new object[] {x, y});
            return ((System.Double)(results[0]));
        }

        public System.IAsyncResult BeginAdd(System.Double x, System.Double y,
            System.AsyncCallback callback, object asyncState)
        {
            return this.BeginInvoke("Add", new object[] {x, y},
                callback, asyncState);
        }

        public System.Double EndAdd(System.IAsyncResult asyncResult)
        {
            object[] results = this.EndInvoke(asyncResult);
            return ((System.Double)(results[0]));
        }
This complex code is produced by the WSDL tool to build the proxy DLL you will need when you build your client. The file uses attributes extensively, but with your working knowledge of C# you can extrapolate at least how some of it works.
The file starts by declaring the Service1 class that derives from the class SoapHttpClientProtocol, which occurs in the namespace called System.Web.Services.Protocols:
public class Service1 : System.Web.Services.Protocols.SoapHttpClientProtocol
The constructor sets the URL property inherited from SoapHttpClientProtocol to the URL of the .asmx page you created earlier.
The Add( ) method is declared with a host of attributes that provide the SOAP goo to make the remote invocation work.
The WSDL application has also provided asynchronous support for your methods. For example, along with the Add( ) method, it also created BeginAdd( ) and EndAdd( ). These allow your client to invoke the web method asynchronously, so it is not blocked while waiting for the response.
To build the proxy, place the code generated by WSDL into a C# Library project in Visual Studio .NET and then build the project to generate a DLL. Be sure to write down the location of that DLL, as you will need it when you build the client application.
To test the web service, create a very simple C# Console application. The only trick is that in your client code, you need to add a reference to the proxy DLL just created. Once that is done, you can instantiate the web service, just like any locally available object:
WSCalc.Service1 theWebSvc = new WSCalc.Service1( );
You can then invoke the Pow( ) method as if it were a method on a locally available object:
for (int i = 2; i < 10; i++)
    for (int j = 1; j < 10; j++)
    {
        Console.WriteLine(
            "{0} to the power of {1} = {2}",
            i, j, theWebSvc.Pow(i, j));
    }
This simple loop creates a table of the powers of the numbers 2 through 9, displaying for each the powers 1 through 9. The complete source code and an excerpt of the output is shown in Example 15-7.
using System;

// driver program to test the web service
public class Tester
{
    public static void Main( )
    {
        Tester t = new Tester( );
        t.Run( );
    }

    public void Run( )
    {
        int var1 = 5;
        int var2 = 7;

        // instantiate the web service proxy
        WSCalc.Service1 theWebSvc = new WSCalc.Service1( );

        // call the add method
        Console.WriteLine("{0} + {1} = {2}",
            var1, var2, theWebSvc.Add(var1, var2));

        // build a table by repeatedly calling the pow method
        for (int i = 2; i < 10; i++)
            for (int j = 1; j < 10; j++)
            {
                Console.WriteLine("{0} to the power of {1} = {2}",
                    i, j, theWebSvc.Pow(i, j));
            }
    }
}

Output (excerpt):
5 + 7 = 12
2 to the power of 1 = 2
2 to the power of 2 = 4
2 to the power of 3 = 8
2 to the power of 4 = 16
2 to the power of 5 = 32
2 to the power of 6 = 64
2 to the power of 7 = 128
2 to the power of 8 = 256
2 to the power of 9 = 512
3 to the power of 1 = 3
3 to the power of 2 = 9
3 to the power of 3 = 27
3 to the power of 4 = 81
3 to the power of 5 = 243
3 to the power of 6 = 729
3 to the power of 7 = 2187
3 to the power of 8 = 6561
3 to the power of 9 = 19683
Your calculator service is now more available than you might have imagined (depending on your security settings) through the web protocols of HTTP-Get, HTTP-Post, or SOAP. Your client uses the SOAP protocol, but you could certainly create a client that would use HTTP-Get:
In fact, if you put that URL into your browser, the browser will respond with the following answer:
<?xml version="1.0" encoding="utf-8"?> <double xmlns="">45</double>
The key advantage SOAP has over HTTP-Get and HTTP-Post is that SOAP can support a rich set of datatypes, including all of the C# intrinsic types (int, double, etc.), as well as enums, classes, structs, and ADO.NET data sets, and arrays of any of these types.
Also, while HTTP-Get and HTTP-Post protocols are restricted to name/value pairs of primitive types and enums, SOAP's rich XML grammar offers a more robust alternative for data exchange.
http://etutorials.org/Programming/Programming+C.Sharp/Part+II+Programming+with+C/Chapter+15.+Programming+Web+Forms+and+Web+Services/15.9+Creating+the+Proxy/
|
The main approach to this problem is to consider how many lights are on for the hours and how many are on for the minutes.
The maximum number of lights used for the hours is min(num, 3), because hour <= 11 and 11 in binary (1011) contains only three ones.
The minimum number of lights used for the hours is max(0, num - 5), because minute <= 59 and 59 in binary (111011) contains five ones, so the minutes can use at most 5 lights.
Then we need a function that returns all n-digit binary numbers in which exactly m digits are ones.
def possibleNums(n, m):
    '''Return all possible n-digit binary numbers with m digits equal to one.'''
    if m == n:
        return [pow(2, n) - 1]
    if m == 0:
        return [0]
    zero = possibleNums(n - 1, m)
    one = possibleNums(n - 1, m - 1)
    one = [z + pow(2, n - 1) for z in one]
    return zero + one
Then the main routine constructs every valid hour and minute combination and appends it to the answer:
def readBinaryWatch(num):
    ans = []
    min_hour = max(0, num - 5)
    max_hour = min(3, num)
    for i in range(min_hour, max_hour + 1):
        hours = possibleNums(4, i)
        hours = [h for h in hours if h <= 11]
        minutes = possibleNums(6, num - i)
        minutes = [m for m in minutes if m <= 59]
        for h in hours:
            for m in minutes:
                if m < 10:
                    m = '0' + str(m)
                else:
                    m = str(m)
                ans.append(str(h) + ':' + m)
    return ans
This is my first time sharing a solution; I hope it helps!
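As a quick sanity check, the two snippets can be assembled into one self-contained program (the wrapper name readBinaryWatch is my own choice). For num = 1, the counting argument above predicts 4 single-light hours with a zero-light minute plus 6 single-light minutes with a zero-light hour, i.e. 10 readings:

```python
def possibleNums(n, m):
    '''Return all n-digit binary numbers with exactly m one bits.'''
    if m == n:
        return [pow(2, n) - 1]
    if m == 0:
        return [0]
    zero = possibleNums(n - 1, m)                               # leading bit is 0
    one = [z + pow(2, n - 1) for z in possibleNums(n - 1, m - 1)]  # leading bit is 1
    return zero + one


def readBinaryWatch(num):
    ans = []
    # split num lights between 4 hour LEDs and 6 minute LEDs
    for i in range(max(0, num - 5), min(3, num) + 1):
        hours = [h for h in possibleNums(4, i) if h <= 11]
        minutes = [m for m in possibleNums(6, num - i) if m <= 59]
        for h in hours:
            for m in minutes:
                ans.append('%d:%02d' % (h, m))
    return ans


print(sorted(readBinaryWatch(1)))
```

Running this for num = 1 yields exactly 10 readings, matching the prediction.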
https://discuss.leetcode.com/topic/61883/python-solution-beating-99-86-when-i-submitted