VanL wrote:

> Hello,
>
> I was recently solving a problem which required that the config file
> (just a Python file) lie outside of the PYTHONPATH. Is it possible to
> do symbolic importation of a module?
>
> For example, I had:
>
>     # cfgpath is passed in on the command line
>     cfgpath = os.path.basename(cfgpath)
>     sys.path.insert(0, cfgpath)
>     import sc_cfg  # Hardcoded config module name!
>
> How do you do this:
>
>     # cfgpath is passed in on the command line
>     cfgpath, cfgfile = os.path.split(cfgpath)
>     sys.path.insert(0, cfgpath)
>     import cfgfile  # Tries to import a module literally named "cfgfile"
>
> so that any file name (ending with .py, of course, but that's another
> matter) could be the config file? I tried the above, but I got an
> ImportError (no module named cfgfile).

Basically it depends what you want to do with the file. If you simply want to execute it in-line (which, given that it's a config file, is most probably the case), then use execfile. By default this executes the file within the namespace of the caller, so it is somewhat like an include or import statement in other languages. NOTE THE CAVEAT RE EXECUTION WITHIN A FUNCTION BODY. Check the manual page re manipulation of the namespace.

If the file has classes you want to create or functions you want to call, you should use __import__. The idea is to put your file (say myfile.py) in your search path. Then, if you want to create an instance of myobject defined in myfile.py:

    modname = 'myfile'
    objname = 'myobject'
    _module = __import__(modname)
    _class = getattr(_module, objname)
    _object = _class(instantiation parameters)

where _object is the instance. Function invocation works in a parallel fashion. Once again, see the manual page re namespace manipulation.

John

> Thanks,
>
> Van
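The __import__ approach above can be sketched end to end in modern Python. This is a minimal illustration, not the poster's exact code: the config directory, file name, and the DEBUG/RETRIES settings are all made up for the example.

```python
import os
import sys
import tempfile

# Create a hypothetical config file outside the module search path.
cfg_dir = tempfile.mkdtemp()
cfg_path = os.path.join(cfg_dir, "my_cfg.py")
with open(cfg_path, "w") as f:
    f.write("DEBUG = True\nRETRIES = 3\n")

# Split the path into directory and module name, as in the question.
dirname, filename = os.path.split(cfg_path)
modname, _ = os.path.splitext(filename)

# __import__ takes the module *name* as a string, so any file name works;
# a bare `import cfgfile` statement cannot do this because it imports the
# literal name "cfgfile".
sys.path.insert(0, dirname)
cfg = __import__(modname)
print(cfg.DEBUG, cfg.RETRIES)  # prints: True 3
```

Since Python 2.7/3.1, importlib.import_module(modname) is the preferred spelling of the same idea.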
https://mail.python.org/pipermail/python-list/2002-March/168449.html
CC-MAIN-2014-10
refinedweb
288
58.69
I'm using Qt to create a small application that displays a GUI and accepts input from a pipe. If the pipe is not created (or, as I understand it, if there's no writer), the call to fopen blocks, and the window never appears even though show() is called before fopen. I tried deferring the fopen call until after the window is loaded:

    connect(this, SIGNAL(window_loaded), this, SLOT(setupListener()));

    #include <QApplication>
    #include "metadataWindow.h"
    #include <sys/time.h>
    #include <sys/types.h>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        metadataWindow window;
        window.showFullScreen();
        window.setupListener();
        return app.exec();
    }

    metadataWindow::metadataWindow(QWidget *parent) : QWidget(parent)
    {
        this->setupUI(); // not shown here, but just basic QLabel stuff
    }

    void metadataWindow::setupListener()
    {
        const char *metadata_file = "/tmp/my-pipe-file";
        // vvvvv This here is blocking vvvvv
        FILE *fd = fopen(metadata_file, "r");
        pipe = new QTextStream(fd);
        streamReader = new QSocketNotifier(fileno(fd), QSocketNotifier::Read, qApp);
        QObject::connect(streamReader, SIGNAL(activated(int)), this, SLOT(onData()));
        streamReader->setEnabled(true);
    }

X is an asynchronous, message-based protocol. An X display server and an X client program are constantly exchanging messages. An X client program doesn't just push some kind of virtual button, draw its window, and call it a day until it wants to change something on the window. The only time there are no messages being exchanged between the display server and a client program is when absolutely nothing happens on the display: no mouse pointer movement, no display activity whatsoever. The task of showing a window involves a number of steps, in sequence. The actual window object itself gets created. All subwindows get created. All windows get mapped. Mapping the window results in the X server sending a series of exposure events to the client program, in response to which the client program is responsible for rendering the exposed part of the window.
All this is done as a sequence of hundreds of messages exchanged between the X display server and an X client program. That's what the QApplication::exec() call does. It enters Qt's main event loop, with the Qt library processing X display events accordingly. Until the event loop runs, there will not be any visible display changes.

The correct design pattern, when working with an event-based infrastructure like X/Qt, is also an event-based approach. You have two basic options.

Execute your blocking application logic in a new thread, independently of the main execution thread that enters Qt's event loop. This bypasses and side-steps the need to conform to an event-driven design pattern, and makes it possible to do pretty much anything an ordinary program would do, without bothering Qt.

Or use an event-driven model, with non-blocking file descriptors, for your own code too. The fopen() library call cannot be used. Instead, the pipe would be open()ed in non-blocking mode; the open() returns immediately, and once the other side of the filesystem pipe is opened, the pipe becomes selectable for reading. Read the manual pages for open() and poll() for more information. Finally, read Qt's documentation for the QSocketNotifier class, which explains how to have the Qt library also monitor for events on your own file descriptors, as part of its main event loop, and invoke your code to handle the task of reading and writing them.

Of course, a hybrid approach, using both execution threads and socket notifiers, is also possible. The important point is to understand how the process should work correctly, and never write any code that blocks Qt's main event loop.
https://codedump.io/share/5m2QZ1qopg6D/1/blocking-call-to-read-a-pipe
CC-MAIN-2017-34
refinedweb
573
54.93
Asked by: AddressFilter mismatch at the EndpointDispatcher

I have a WCF service. At the beginning of developing my service, I created a unit test against the service, added the service reference, and could call my service methods without any problem. Then I deployed my service to our dev server box and changed the service reference to the dev box URL in my unit test. After everything worked fine for the dev box, I switched my service reference back to my local box in the unit test. Now I get this error all the time when using my local WCF service, but have no problem calling my dev WCF service: "The message with To '' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree."

I can fix the problem by adding a service behavior to set AddressFilterMode = AddressFilterMode.Any. However, I want to know why my local WCF service doesn't work without setting AddressFilterMode.Any. Where is the mismatch? How can I see the mismatch? I checked that the web.config on my local service and on the dev service is the same. I also checked the client app.config; they are the same except that the server name is different. I also regenerated the service proxy via VS Add Service Reference. But nothing worked. Can anyone help me out here? Thanks in advance!!!

Question

All replies

Hi,

It seems the endpoint URL used at the client is not matching the one at the service side. This is a common problem when the client accesses the service through an intermediate node. When hosted in IIS, the EndpointAddress of the service comes from the IIS base address (host headers). In this case, it appears to be the "our-server" address. By default, WCF ensures that the To of each Message matches the intended address. If the service just has this one endpoint, a quick workaround would be to use [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)], which turns off the address filter.
Otherwise, ensure the IIS host headers contain the 'public facing base address' that you want clients to connect to, so that messages will be properly addressed to the service. See ServiceBehaviorAttribute.AddressFilterMode.

Hope it can help you.

Best Regards,
Amy Peng
MSDN Community Support | Feedback to us
Please remember to mark the replies as answers if they help and unmark them if they provide no help.

In my case, there is no intermediate node. My unit test calls the WCF service directly. My point is that it worked at the beginning and suddenly it doesn't work anymore. My service is hosted in IIS. My service is deployed to 2 different machines. One works, one doesn't; my unit test calls the service on each machine in turn. You said "By default, WCF ensures that the To of each Message matches the intended address." How can I see the To of my Message via code? Thanks!!!

I just realized that when I ran the selected test, I didn't get the AddressFilter mismatch error; my unit test passed successfully. However, when I debugged the selected test, I did get the AddressFilter mismatch error. I also turned on the svc log via web.config. When I ran the selected test, the log was generated. I looked at the Message tab; there are 4 actions related to PrepareShipment. The first two use the correct service namespace; the third and fourth ones use tempuri.org as the service namespace. Why? Here is the screen shot:

Why does debugging cause the AddressFilter mismatch error? When debugging fails, there is no svc log created, so I cannot see the To of the Message. Do we have another way to see To via code (client-side code)? Thanks!!!
https://social.msdn.microsoft.com/Forums/en-US/02a25957-43e8-43e3-9c1d-ed8226d929a5/addressfilter-mismatch-at-the-endpointdispatcher
CC-MAIN-2017-34
refinedweb
616
67.35
Sending html emails from org-mode with org-mime

Posted October 29, 2016 at 02:33 PM | categories: orgmode, email | tags: | View Comments

Table of Contents

On the org-mode mailing list there was some discussion about sending html mail using org-mode. The support for this in mu4e is deprecated. There is the org-mime library though, and it supports a lot of what is needed for this. As I played around with it, I came across some limitations:

- I found equations were not rendered as images in the html, and files (in links) were not attached out of the box. I fixed that here.
- I found it useful to modify the org-mime commands to leave point in the To: field when composing emails from org-buffers.
- For use with mu4e, I created a function to open a message in org-mu4e-compose-org-mode, and added a C-c C-c hook to allow me to send it easily (here).

This post documents some work I did figuring out how to send html mails. After some testing, some of these should probably be patched into org-mime. First, you need to require this library.

(require 'org-mime)

You can send a whole org buffer in html with the command org-mime-org-buffer-htmlize. Not all of the internal links work for me (at least in gmail). The default behavior leaves you at the end of the buffer, which is not too nice. We lightly modify the function here to leave point in the To: field.

(defun org-mime-org-buffer-htmlize ()
  "Create an email buffer containing the current org-mode file
exported to html and encoded in both html and in org formats as
mime alternatives."
  (interactive)
  (org-mime-send-buffer 'html)
  (message-goto-to))

1 From an org-headline in an org-file

You can compose an email as an org-heading in any org-buffer, and send it as html. In an org-heading, you need to specify a MAIL_FMT property of html, e.g.:

:PROPERTIES:
:MAIL_FMT: html
:END:

Note the following properties can also be set to modify the composed email.
(subject (or (funcall mp "MAIL_SUBJECT") (nth 4 (org-heading-components))))
(to (funcall mp "MAIL_TO"))
(cc (funcall mp "MAIL_CC"))
(bcc (funcall mp "MAIL_BCC"))

Then, send it with org-mime-subtree. Here I modify this function to leave me in the To: field.

(defun org-mime-subtree ()
  "Create an email buffer containing the current org-mode subtree
exported to an org format or to the format specified by the
MAIL_FMT property of the subtree."
  (interactive)
  (org-mime-send-subtree
   (or (org-entry-get nil "MAIL_FMT" org-mime-use-property-inheritance) 'org))
  (message-goto-to))

Here are some sample elements to see if they convert to html reasonably.

1.1 Markup

bold underlined italics strikethrough code

Subscripts: H2O
Superscripts: H+
An entity: To ∞ and beyond

1.2 Equations

\(x^2\) \[x^4\] \(e^x\)

1.3 Tables

1.4 Lists

A nested list.

- one
  - Subentry under one.
- two

A definition list:

- def1 :: first definition

A checklist:

- [ ] A checkbox

Here is a numbered list:

1. number 1
2. number 2

1.5 Code block

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10)
x = np.cos(t) * np.exp(-t)
y = np.sin(t) * np.exp(-t)
plt.plot(x, y)
plt.savefig('spiral.png')

1.6 An image from somewhere other than this directory

1.7 Citations with org-ref

2 In a mail message

You might prefer to do this directly in an email. Here is how you can do it in mu4e. I use this command to open a message in org-mode. The mode switches depending on whether you are in the header or in the body. If you always do this, you could use a hook instead on message-mode. I do not want html by default, so I do not do that.

(defun mu4e-compose-org-mail ()
  (interactive)
  (mu4e-compose-new)
  (org-mu4e-compose-org-mode))

For sending, we will use org-mime to htmlize it, and add a C-c C-c hook function to send it. This hook is a little tricky: we want to preserve C-c C-c behavior in org-mode, e.g. in code blocks, but send the message if there is no other C-c C-c action that makes sense, so we add it to the end of the hook.
Alternatively, you could bind a special key for it, or run the command directly. Note the C-c C-c hook only works in the body of the email. From the header, a plain text message is sent.

(defun htmlize-and-send ()
  "When in an org-mu4e-compose-org-mode message, htmlize and send it."
  (interactive)
  (when (member 'org~mu4e-mime-switch-headers-or-body post-command-hook)
    (org-mime-htmlize)
    (message-send-and-exit)))

(add-hook 'org-ctrl-c-ctrl-c-hook 'htmlize-and-send t)

Here is a way to do this for non-mu4e users. It doesn't have the nice mode-switching capability though, so you lose completion in emails and the header-specific functions. You can switch back to message-mode to regain those.

(defun compose-html-org ()
  (interactive)
  (compose-mail)
  (message-goto-body)
  (setq *compose-html-org* t)
  (org-mode))

(defun org-htmlize-and-send ()
  "When in a compose-html-org message, htmlize and send it."
  (interactive)
  (when *compose-html-org*
    (setq *compose-html-org* nil)
    (message-mode)
    (org-mime-htmlize)
    (message-send-and-exit)))

(add-hook 'org-ctrl-c-ctrl-c-hook 'org-htmlize-and-send t)

3 Equations and file attachments do not seem to work out of the box

\(e^{i\pi} + 1 = 0\)

Out of the box, org-mime does not seem to attach file links to emails or make images for equations. Here is an adaptation of org-mime-compose that does that for html messages.
(defun org-mime-compose (body fmt file &optional to subject headers)
  (require 'message)
  (let ((bhook
         (lambda (body fmt)
           (let ((hook (intern (concat "org-mime-pre-"
                                       (symbol-name fmt)
                                       "-hook"))))
             (if (> (eval `(length ,hook)) 0)
                 (with-temp-buffer
                   (insert body)
                   (goto-char (point-min))
                   (eval `(run-hooks ',hook))
                   (buffer-string))
               body))))
        (fmt (if (symbolp fmt) fmt (intern fmt)))
        (files (org-element-map (org-element-parse-buffer) 'link
                 (lambda (link)
                   (when (string= (org-element-property :type link) "file")
                     (file-truename (org-element-property :path link)))))))
    (compose-mail to subject headers nil)
    (message-goto-body)
    (cond
     ((eq fmt 'org)
      (require 'ox-org)
      (insert (org-export-string-as
               (org-babel-trim (funcall bhook body 'org)) 'org t)))
     ((eq fmt 'ascii)
      (require 'ox-ascii)
      (insert (org-export-string-as
               (concat "#+Title:\n" (funcall bhook body 'ascii)) 'ascii t)))
     ((or (eq fmt 'html) (eq fmt 'html-ascii))
      (require 'ox-ascii)
      (require 'ox-org)
      (let* ((org-link-file-path-type 'absolute)
             ;; we probably don't want to export a huge style file
             (org-export-htmlize-output-type 'inline-css)
             (org-html-with-latex 'dvipng)
             (html-and-images
              (org-mime-replace-images
               (org-export-string-as (funcall bhook body 'html) 'html t)))
             (images (cdr html-and-images))
             (html (org-mime-apply-html-hook (car html-and-images))))
        (insert (org-mime-multipart
                 (org-export-string-as
                  (org-babel-trim
                   (funcall bhook body (if (eq fmt 'html) 'org 'ascii)))
                  (if (eq fmt 'html) 'org 'ascii) t)
                 html)
                (mapconcat 'identity images "\n")))))
    (mapc #'mml-attach-file files)))

4 Summary

This makes it pretty nice to send rich-formatted html text to people.

Copyright (C) 2016 by John Kitchin. See the License for information about copying.

Org-mode version = 8.3.5
http://kitchingroup.cheme.cmu.edu/blog/2016/10/29/Sending-html-emails-from-org-mode-with-org-mime/
CC-MAIN-2020-05
refinedweb
1,264
61.56
Evaluating a variable inside a function while integrating

I've got a strange problem. The code to reproduce it is given below:

    from scipy.constants import h, c, k

    def T2(x):
        a = 11717
        if x < 21500:
            return a*(x**-0.53)
        else:
            print(x)  # Just for debugging
            return a*(x**-0.75)

    # Blackbody Planck function
    def B(Lambda, Temp):
        return 2*h*c**2/(Lambda**5 * (exp(h*c/(Lambda*k*Temp)) - 1))

    def flux2(Lambda):
        return numerical_integral(2*pi*x*B(Lambda, T2(x)), 7, 2150)[0]

    print 2.5*log(flux2(9000*10**-10))

The problem is inside the T2() function. Since the integral in x is from 7 to 2150, the if condition should be satisfied and return a*(x**-0.53). But instead it is evaluating the else branch. print(x) is printing the symbol 'x' instead of the value the variable x is supposed to take at each point of the integral. I guess I have misunderstood how these functions work inside an integral. What is it that I am doing wrong here?

Update: I instead tried the Piecewise() function to define T2() as follows:

    a = 11717
    T2 = Piecewise([[(0, 215), a*(x**-0.53)], [(215, 21500), a*(x**-0.75)]])

but inside the integral function I am getting a ValueError:

    ValueError: Value not defined outside of domain.
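The underlying issue is that T2() is called once with the symbolic variable x when the integrand expression is built, so `x < 21500` is decided a single time (on a symbol) rather than at every sample point. A minimal sketch of the fix is to hand the integrator a plain numeric function, so the branch is re-evaluated per point. Note this uses scipy's quad and a simplified integrand (the Planck factor B is dropped here purely for illustration):

```python
from math import pi
from scipy.integrate import quad

a = 11717

def T2(x):
    # With a plain float argument, this branch is re-evaluated at every
    # sample point; with a symbolic x it would be decided only once,
    # when the expression is first constructed.
    if x < 21500:
        return a * x**-0.53
    else:
        return a * x**-0.75

# Integrate 2*pi*x*T2(x) over [7, 2150] numerically; every x that quad
# passes to T2 is a concrete float, so the `if` works as intended.
val, err = quad(lambda x: 2 * pi * x * T2(x), 7, 2150)
print(val)  # a positive number; the if-branch was taken at every point
```

In Sage, wrapping the piecewise logic in a plain Python function and integrating it numerically (rather than building a symbolic expression first) sidesteps both the symbolic-comparison problem and the Piecewise domain error.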
https://ask.sagemath.org/question/9627/evaluvating-variable-inside-a-function-while-integrating/?sort=latest
CC-MAIN-2020-45
refinedweb
224
59.3
What IS wrong with my code... I can't figure it out. It's meant to read a String in the do..while loop and get the char at position 0.

Code java:

    /* This is not an assignment. Just an exercise in the textbook I am using to learn Java. */

    import java.util.Scanner;

    public class Mileage
    {
        public static void main(String[] args)
        {
            // initializing variables and Scanner object
            Scanner scan = new Scanner(System.in);
            int milesDriven;
            int gallonsUsed;
            int totalMiles = 0;
            int totalGallons = 0;
            double MPG, totalMPG = 0.0, averageMPG;
            int tankful = 1;

            // sentinel chosen to be a String...allows for a more user friendly interface
            String sentinelString = ""; // initialized to an empty String
            char sentinelChar = '0';    // initialized to 0

            System.out.println("\nGood day.\n");

            while (sentinelChar != 'N' || sentinelChar != 'n')
            {
                System.out.print("Please enter the number of miles driven on Tankful " + tankful + " : ");
                milesDriven = scan.nextInt();
                System.out.print("Please enter the number of gallons consumed on Tankful " + tankful + " : ");
                gallonsUsed = scan.nextInt();

                MPG = (double) milesDriven / gallonsUsed;
                totalMiles += milesDriven;
                totalGallons += gallonsUsed;
                System.out.printf("\nThe miles per gallon on Tankful %d is %.2f.", tankful, MPG);
                totalMPG += MPG;
                tankful++;

                // This portion of my code is not working and I don't know why. Has to do with the Scanner... I guess
                do
                {
                    System.out.print("\nDo you want to enter for more Tankfuls? (Y/N) : ");
                    sentinelString = scan.next();
                    scan.nextLine(); // the other scan is to remove any extra Strings
                    sentinelChar = sentinelString.charAt(0);
                    if (sentinelChar != 'Y' || sentinelChar != 'N' || sentinelChar != 'y' || sentinelChar != 'n')
                    {
                        System.out.println("That entry is invalid.");
                    }
                } while (sentinelChar != 'Y' || sentinelChar != 'N' || sentinelChar != 'y' || sentinelChar != 'n');
            }

            System.out.printf("\nThe miles per gallon for all %d Tankfuls is %.2f");
        }
    }

Thanks guys.
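For what it's worth, the likely culprit is the boolean logic rather than the Scanner: a condition of the form `c != 'Y' || c != 'y'` is true for every character, because no single character can equal both letters at once. A small standalone sketch of the difference (the class name here is made up):

```java
public class SentinelDemo {
    public static void main(String[] args) {
        char c = 'n';  // the user typed "n" to stop

        // Always true: 'n' != 'N' holds, so the || short-circuits to true.
        boolean orVersion = (c != 'N' || c != 'n');

        // False for 'n' and 'N', true otherwise -- what the loop intends.
        boolean andVersion = (c != 'N' && c != 'n');

        System.out.println(orVersion + " " + andVersion);  // prints: true false
    }
}
```

By De Morgan's laws, "not (Y or N or y or n)" needs && between the four comparisons, both in the do..while test and in the if that prints "That entry is invalid." (The final printf is also missing its two arguments.)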
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/13442-beginner-program-doesnt-work-printingthethread.html
CC-MAIN-2013-48
refinedweb
278
53.17
I've got a checkbox that I want to display on my view, related to a field called Public, which basically says whether the particular row is public or not. In the database this is a bit field, but it allows nulls, due to how the table previously used to work. I'm using Html.CheckBoxFor, but it is complaining about this field because in the system it is not a bool type, but rather a bool? type. What I want to do is have it so that if the field value is null, then it counts as false on the front end (unfortunately, updating the database values themselves is not an option). I have tried using GetValueOrDefault, and putting a default value in my model file along the lines of:

    public class SearchModel
    {
        public bool? Public { get; set; }

        public SearchModel()
        {
            Public = false;
        }
    }

but I get:

    An exception of type 'System.InvalidOperationException' occurred in System.Web.Mvc.dll but was not handled in user code

    Additional information: Templates can be used only with field access, property access, single-dimension array index, or single-parameter custom indexer expressions.

    Html.CheckBoxFor(model => model.Public, new { data_toggle = "toggle", data_off = "No", data_on = "Yes", data_size = "small" })

The specific exception you're getting occurs when you pass an expression to one of the templated helpers that can't be evaluated. Bear in mind that when you're using the expression-based helpers, you're not actually passing a property by value but rather an expression that represents a property on your model, which the helper will use to reference that property, generate field names from, etc. You haven't shown the actual code where you're doing this, but essentially this means you can't do something like:

    @Html.EditorFor(m => m.Public.GetValueOrDefault())

because the templated helper cannot resolve that as an expression that matches up with a property on your model. As to your actual concern here, namely setting the value to false if it's null, you just need a custom getter and setter.
@utaco's answer provides the newer, easier C# 6.0 method of auto-implemented properties with defaults:

    public bool? Public { get; set; } = false;

For previous versions of C#, you need the following (note the backing field cannot be named public, since that is a keyword):

    private bool? _public;

    public bool? Public
    {
        get { return _public ?? false; }
        set { _public = value; }
    }

However, keeping Public as a nullable bool when you have no intention of it ever actually being null just makes your code more difficult. Assuming you can change that to just bool (i.e., this is a view model and not the actual entity class tied to your database table), then you should do so. You still want to keep the private backing field nullable, though. That allows you to accept nulls in the setter but coerce them into false values in the getter, meaning the value the property exposes will always be either true or false, i.e., not null.
https://codedump.io/share/qLFSwG512QNf/1/mvc-use-htmlcheckboxfor-with-nullable-bool
CC-MAIN-2018-26
refinedweb
487
60.65
This article is the continuation of the article published last month with almost the same name, with just one key word replaced. But it is a replacement of fundamental importance: the move from MFC to Win32 implies great changes in the code, and a lot of people prefer Win32. Before building the project provided, it is highly recommended to have a look at the Demo presentation enclosed, in order to get an idea of the expected output. The executable WIN_GL.exe in the WIN_GLdemo directory has been built using MSVS-2015 Pro.

Most of the keyboard and mouse commands remain the same as in the original lessons on the NeHe site. The controls of the Help Dialog are as follows: the Help Dialog is unmodal; therefore, it is possible either to use the keyboard commands listed, or to select the command from the List Control and press the OK button (or double-click the left mouse button).

With regards to the respected authors of the original code, the About Dialog (Menu Help->About...) contains the original NeHe ReadMe texts.

The Joystick is available in certain lessons. The controls of the Joystick Test Dialog are as follows: while the Joystick Test Dialog is visible, all the Joystick commands are valid for this dialog only; for the application itself, the Joystick commands are not available. Please note that the Joystick Test Dialog is based on my CodeProject article "Joystick Win32 and MFC Projects", just with the Picture Control removed.
The help on the Joystick for the lessons affected may be obtained with the Menu Help->Joystick Help Dialog command (or keyboard Ctrl+Y, or the Joystick help button in the Help dialog), which opens the Joystick Help dialog (where applicable). The Joystick Help Dialog is unmodal; therefore, it is possible either to use the Joystick commands listed, or to select the command from the list and press the OK button (or double-click the left mouse button on the corresponding line).

Before you start to build all the projects with the menu Project->Build Solution on the WIN_GLproj\WIN_GLproj\WIN_GLproj.sln file, please take into account that the solution consists of three similar projects that differ just by the degree of building complexity: WIN_GL0, WIN_GL1, WIN_GL.

Even if you are working with MSVS for the first time, just appoint the WIN_GL0 project as the StartUp project (right mouse button click -> menu Set as StartUp Project), select menu Debug->Start Without Debugging, and the program WIN_GL0.exe should start building and working.

The project provided has been developed with MSVS-2015 Pro. The WIN_GL1 project adds the Joystick implementation; the problem is that the external library hid.lib is used. Before you start to build the project WIN_GL1 with the instruments of MSVS-2010, you must ensure three things in the project concerned, among them:

    #include <hidsdi.h>

Lesson 43 requires the external library freetype.lib, which requires the installation of the Program Files\GnuWin32 directory (17.8 MB) from the link. Before you start building the project WIN_GL with Lesson 43, you must ensure four things in the project concerned.

Lesson 47 requires the external libraries cg.dll and cgGL.dll, which require the installation of the Program Files\NVIDIA Corporation directory (596 MB) from the link.

At first glance, it seemed that there are too many files enclosed, but further consideration shows that everything is not too complicated.
The project consists of common procedures and special procedures for every lesson implementation. All three projects, WIN_GL0, WIN_GL1 and WIN_GL, have a similar structure. The demo lessons have been combined by the author from the original lessons 07, 11, 30, 32, 34 for demo purposes, and X5_BoxmanDrawInit.cpp has been developed by the author for demo purposes. Just a few changes have been made in the XX_*DrawInit.cpp codes from the previous article, due to the platform change from MFC to Win32.

Special source code for the separate lessons, in the WIN_GLproj\NeHeProgs\XX_Spec paths (XX in range 01-48), has been borrowed from the original lessons or developed by the author. Just a few changes have been made in the special source code files from the previous article, due to the platform change from MFC to Win32.

Source codes in the WIN_GLproj\GlobUse path have been developed by the author based on the standard Win32 Application Wizard technologies. Just a few changes have been made in GlobDrawGL.cpp from the previous article, due to the platform change. Data files have been created by the author; nothing has been changed in the data files or in the ReadMe texts from the previous article.

Resource files in the WIN_GLproj path have been borrowed from the previous article, with a few changes regarding the name of the application.

The control flow-chart of the standard OpenGL application, with the steps numbered as per the items of this article, shows which procedures are to be called from the initialization procedure. The CreateGLWindow procedure has been borrowed almost completely from the original NeHe lessons, and as much as possible I've tried not to change the authoring code.
Initialization procedure to be called in the procedure CreateGLWindow:

    if (!InitGL())  // Initialize Our Newly Created GL Window
    {
        KillGLWindow();  // Reset The Display
        MessageBox(NULL, _T("Initialization Failed."), _T("ERROR"), MB_OK | MB_ICONEXCLAMATION);
        exit(0);
    }

The InitGL initialization procedure of the OpenGL window (located in GlobUse\GlobDrawGL.cpp) refers to the initialization procedure InitGL_XX(GLvoid) of the current lesson, located in NeHeDrawInit\XX_*DrawInit.cpp. These initialization procedures have been borrowed from the NeHe site; I've tried not to change this code as far as possible.

The only big difference is that I've taken the liberty to exclude the Glaux.lib library, used in all the lessons for texture initialization. Nevertheless, I hope the authors of the original code will not complain. Surely they could not expect that the MS VS developers would exclude the Glaux.lib library from the 10th version onwards. And while it is still possible to include Glaux.lib in the MS VS 10th version, in the 11th version it does not work. The procedures for texture initialization used instead of Glaux.lib are located in GlobUse\GlobTexture.cpp. The instructions for texture initialization may be obtained from my former CodeProject articles "Texture Mapping in OpenGL using Class CImage" and "Masking Texture in OpenGL using Class CImage".
The procedure ResizeGLScene is common to all the lessons and is called on the WM_SIZE message in WndGLProc:

    case WM_SIZE:  // Resize The OpenGL Window
    {
        ReSizeGLScene(LOWORD(lParam), HIWORD(lParam));  // LoWord=Width, HiWord=Height
        return 0;  // Jump Back
    }

In distinction from the NeHe original base, some global variables have been inserted:

    GLfloat fNearPlane = 0.1f,   // Frustum Near Plane
            fFarPlane = 100.f,   // Frustum Far Plane
            fViewAngle = 45.f,   // Frustum View Angle
            fAspect;             // Frustum Width/Height ratio
    int m_boxWidth = 800;        // GL window width
    int m_boxHeight = 600;       // GL window height

In some lessons (30, 31, 34, 36, 44, 45, 46, X4), the Frustum values set in the InitGL_XX procedure differ from the default ones. In lessons 21 and 24, the glOrtho presentation has been used.

The ProcessKeyboardN and VkKeyUpN keyboard command handling procedures (located in GlobUse\GlobDrawGL.cpp) refer to the keyboard procedures ProcessKeyboard_XX(int idKey) and VkKeyUp_XX of the current lesson, located in NeHeDrawInit\XX_*DrawInit.cpp. These keyboard commands have been borrowed from the NeHe site; I've tried not to change these commands as far as possible. Just the F1 command I've changed for the Help performance.

If a new lesson is selected in the Lesson Dialog to start in the same window, the memory must be released and the OpenGL window must be closed. Also, before the application exits, the memory must be released and the OpenGL window must be closed:

    case ID_APP_EXIT:
        Deinitialize();
        KillGLWindow();
        DestroyWindow(hWnd);
        break;

The DeinitializeN procedure for clearing memory (located in GlobUse\GlobDrawGL.cpp) refers to the procedure Deinitialize_XX of the current lesson, located in NeHeDrawInit\XX_*DrawInit.cpp. These clear-memory procedures have been borrowed from the NeHe site; I've tried not to change them as much as possible. The procedure KillGLWindow is common to all the lessons and located in the GLRoutine.cpp file in the common GlobUse directory.
The KillGLWindow procedure has been borrowed almost completely from the original NeHe lessons, and as much as possible I've tried not to change the authoring code.

Joystick handling is available in lessons 09, 10, 32, 40, X4, X5 in the WIN_GL and WIN_GL1 projects. In the WIN_GL0 project, the Joystick functions are not available. The next two steps are required to make WIN_GL0 work with the Joystick as in WIN_GL1:

    void InitJoystickDlg(void) {}

And that's all (point 2.1 of this article conditions understood). Now the project should work with the Joystick. For example, the HandleJoystick_XX procedure of lesson 40:

    void HandleJoystick_40(int nx, int ny, int nz, int nzr, int nh, BOOL bButtonStates[], BOOL bButtonPrev[])
    {
        ropeConnectionVel = Vector3D(-0.003f*nzr, 0.003f*nx, 0.003f*ny);
    }

You may change this existing HandleJoystick_XX code or create your own as you like. Lessons 43 and 47 require some external libraries (comments on the code, if any, will be highly appreciated).

The present three projects of the solution provided have been developed one from another, just by renaming, with the technology above. The projects above have been developed with the great help of the editor of the Russian NeHe site, Sergey Anisimov, whom I thank very much for his kind assistance. Also many thanks to the respected authors of the original code.
https://codeproject.freetls.fastly.net/Articles/1191056/OpenGL-Win-Projects-in-One?display=Print
The preprocessor is a stage of the compilation process that runs before the actual compiler. It pre-processes the source of a C/C++ program before compilation begins. A preprocessor command is introduced with a hash symbol (#), which should be the first non-blank character on the line; for better readability, a preprocessor directive should start in the first column.

List of Preprocessor Directives

The main types of preprocessor directives are:

- #define: It defines a macro that the preprocessor substitutes into the code.
- #include: It inserts the contents of another file, typically a header.
- #undef: It undefines a previously defined preprocessor macro.
- #ifdef: It evaluates to true if a certain macro is defined.
- #ifndef: It evaluates to true if a certain macro is not defined.
- #if, #elif, #else, and #endif: They compile parts of the program conditionally; these directives can be nested.
- #line: It controls the line numbers reported in errors and warnings. It can be used to change the line number and source file name reported in output generated at compile time.
- #error and #warning: They can be used for generating errors and warnings.
  - #error can be used to stop compilation.
  - #warning continues compilation and prints a message in the console window.
- #region and #endregion: To make sections of code more understandable and readable, we can define regions with expand and collapse features (these two are specific to C#).

Process Flow of Preprocessor Directives

- A developer writes a C program -> the preprocessor checks whether any preprocessor directives are present.
- If they are present, the preprocessor acts on them first; the compiler then generates the object code, and the linker produces the executable that is run.
- If no preprocessor directives are present, the source goes straight to the compiler. The compiler then generates the object code, and the linker produces the executable.
Here in the article, we will look at various examples of preprocessor directives.

Four Major Types of Preprocessor Directives

1. Macro Expansion

In macro expansion, we can define two types of macros: simple macros and macros with arguments. Arguments passed to a macro make it behave similarly to a function.

Syntax:

#define name replacement_text

Where,

- name: the macro template.
- replacement_text: the macro expansion.

A few conventions:

- Macro names are usually written in capital letters.
- For better readability, we should give macros suitable, descriptive names.
- To modify the program, we only need to change the macro, and the change is reflected everywhere it is used; we do not need to edit every occurrence.

Example: Basic Macro

#include <stdio.h>

#define PrintLOWER 50

int main()
{
    int j;
    for (j = 1; j <= PrintLOWER; j++)
    {
        printf("\n%d", j);
    }
    return 0;
}

Example: Macros With Arguments

#include <stdio.h>

#define AREA(a) (3.14 * (a) * (a))   /* area of a circle: pi * r * r */

int main()
{
    float r = 3.5, x;
    x = AREA(r);
    printf("\n Area of circle = %f", x);
    return 0;
}

Types of Predefined Macros

- __TIME__ expands to the current time as a character literal in "HH:MM:SS" format.
- __STDC__ is defined as 1 when the compiler complies with the ANSI standard.
- __TIMESTAMP__ expands to a timestamp of the form "DDD MMM DD HH:MM:SS YYYY": the date and time of the last modification of the current source file.
- __LINE__ contains the current line number as a decimal constant.
- __FILE__ contains the current file name as a string literal.
- __DATE__ expands to the current date as a character literal in "MMM DD YYYY" format.

2. File Inclusion

For file inclusion, we use #include.

Syntax:

#include TypeYourfilename

- The content of the named file replaces the directive at the point where it is written.
- We use the file inclusion directive to include header files in our programs.
- We can put function declarations, macros, and declarations of external variables in a header file rather than repeating that logic in every program.
- The stdio.h header file, for example, contains the function declarations for input and output.

Examples of the file inclusion statement:

- i) #include "DefineYourfile-name": The preprocessor searches for the file in the current working directory first, and then in the standard include directories.
- ii) #include <DefineYourfile-name>: The preprocessor searches for the file in the standard (compiler-defined) include directories.

3. Conditional Compilation

- Conditional compilation lets us compile a block of code only when a certain condition holds.
- It uses directives like #if, #elif, #else, and #endif.

Syntax:

#if TESTYourCondition <= 8
statement 1;
statement 2;
statement 3;
statement 4;
#else
statement 5;
statement 6;
statement 7;
#endif

4. Miscellaneous Directives

There are two miscellaneous directives, apart from the ones above, that are not commonly used:

- #undef: This directive is used together with #define. It undefines a specified macro.
- #pragma: This directive turns specific compiler features on or off. Pragmas are compiler-specific, so the supported set varies from compiler to compiler.

Examples of #pragma directives are discussed below:

- #pragma startup and #pragma exit: These directives name functions that must run at program startup (before control reaches main()) and at program exit (just before control returns from main()).
Code Example: #pragma Directives

#include <iostream>
using namespace std;

void Empfun1();
void Empfun2();

#pragma startup Empfun1
#pragma exit Empfun2

void Empfun1()
{
    cout << "Print the logic on Empfun1()\n";
}

void Empfun2()
{
    cout << "Print the logic on Empfun2()\n";
}

int main()
{
    cout << "Print main output ()\n";
    return 0;
}

Output:

Print the logic on Empfun1()
Print main output ()
Print the logic on Empfun2()

(Note: #pragma startup and #pragma exit are supported by Borland-style compilers; compilers that do not recognize them, such as GCC and MSVC, simply ignore the unknown pragmas, so only the main output would be printed there.)

#pragma warn Directive: This directive is used to turn specific compiler warnings on and off. The available pragmas vary from one compiler to another; the Microsoft C compiler, for example, provides pragmas that control listings and place comments into the object file generated by the compiler. Each compiler defines its own implementation rules for the pragmas it supports.

#pragma startup and #pragma exit: These pragma directives name functions that run before program startup (before control reaches main()) and before program exit (just before control returns from main()).

Example:

#include <stdio.h>

void testFun1();
void testFun2();

#pragma startup testFun1
#pragma exit testFun2

void testFun1()
{
    printf("It is Inside the testFun1()\n");
}

void testFun2()
{
    printf("It is Inside the func2()\n");
}

int main()
{
    printf("It is Inside the function of main()\n");
    return 0;
}

Output

It is Inside the testFun1()
It is Inside the function of main()
It is Inside the func2()

Conclusion

We hope this article helped you understand preprocessor directives. In this article, we discussed the various definitions and techniques of preprocessor directives, with sample code in C and C++.
We also learned their types, with the respective syntax and examples. This article will be helpful to professional developers from .NET and Java backgrounds, application architects, cloud specialists, testers, and other learners looking to understand how preprocessor directives work. Besides pursuing the varied courses provided by Simplilearn, you can also sign up on our SkillUp platform, a Simplilearn initiative, which offers multiple free online courses to help learners understand the basics of numerous programming languages, including preprocessor directives in C and C++. You can also opt for our Full-stack Web Development Certification Course to improve your career prospects by mastering both backend and frontend with other tools like Spring Boot, Angular MVC, JSPs, and Hibernate.
https://www.simplilearn.com/tutorials/c-tutorial/c-preprocessor-directives
Here at Plataformatec we use Github Pull Requests a lot for code review and this usually yields tons of constructive comments and excellent discussions from time to time. One of the recent topics was about whether we should use scopes or class methods throughout the project to be consistent. It’s also not hard to find discussions about it all over the internet. The classic comment usually boils down to “there is no difference between them” or “it is a matter of taste”. I tend to agree with both sentences, but I’d like to show some slight differences that exist between both. Defining a scope First of all, lets get a better understanding about how scopes are used. In Rails 3 you can define a scope in two ways: class Post < ActiveRecord::Base scope :published, where(status: 'published') scope :draft, -> { where(status: 'draft') } end The main difference between both usages is that the :published condition is evaluated when the class is first loaded, whereas the :draft one is lazy evaluated when it is called. Because of that, in Rails 4 the first way is going to be deprecated which means you will always need to declare scopes with a callable object as argument. This is to avoid issues when trying to declare a scope with some sort of Time argument: class Post < ActiveRecord::Base scope :published_last_week, where('published_at >= ?', 1.week.ago) end Because this won’t work as expected: 1.week.ago will be evaluated when the class is loaded, not every time the scope is called. Scopes are just class methods Internally Active Record converts a scope into a class method. Conceptually, its simplified implementation in Rails master looks something like this: def self.scope(name, body) singleton_class.send(:define_method, name, &body) end Which ends up as a class method with the given name and body, like this: def self.published where(status: 'published') end And I think that’s why most people think: “Why should I use a scope if it is just syntax sugar for a class method?”. 
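To see that a scope really is just a defined class method, here is a plain-Ruby sketch — no ActiveRecord; FakeModel and the returned SQL strings are made up for illustration — of the simplified implementation quoted above:

```ruby
# Stand-in for the simplified Active Record implementation shown above:
# `scope` just defines a class method from the given callable.
class FakeModel
  def self.scope(name, body)
    singleton_class.send(:define_method, name, &body)
  end
end

class Post < FakeModel
  # The lambda body is evaluated on every call, not when the class loads.
  scope :by_status, ->(status) { "WHERE status = '#{status}'" }
end

puts Post.by_status("published")   # WHERE status = 'published'
puts Post.respond_to?(:by_status)  # true: it is an ordinary class method
```

Calling `Post.by_status` here re-evaluates the lambda each time, which is exactly why the lazy `-> { ... }` form avoids the `1.week.ago` pitfall described above.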
So here are some interesting examples for you to think about. Scopes are always chainable Lets use the following scenario: users will be able to filter posts by statuses, ordering by most recent updated ones. Simple enough, lets write scopes for that: class Post < ActiveRecord::Base scope :by_status, -> status { where(status: status) } scope :recent, -> { order("posts.updated_at DESC") } end And we can call them freely like this: Post.by_status('published').recent # SELECT "posts".* FROM "posts" WHERE "posts"."status" = 'published' # ORDER BY posts.updated_at DESC Or with a user provided param: Post.by_status(params[:status]).recent # SELECT "posts".* FROM "posts" WHERE "posts"."status" = 'published' # ORDER BY posts.updated_at DESC So far, so good. Now lets move them to class methods, just for the sake of comparing: class Post < ActiveRecord::Base def self.by_status(status) where(status: status) end def self.recent order("posts.updated_at DESC") end end Besides using a few extra lines, no big improvements. But now what happens if the :status parameter is nil or blank? Post.by_status(nil).recent # SELECT "posts".* FROM "posts" WHERE "posts"."status" IS NULL # ORDER BY posts.updated_at DESC Post.by_status('').recent # SELECT "posts".* FROM "posts" WHERE "posts"."status" = '' # ORDER BY posts.updated_at DESC Oooops, I don't think we wanted to allow these queries, did we? With scopes, we can easily fix that by adding a presence condition to our scope: scope :by_status, -> status { where(status: status) if status.present? } There we go: Post.by_status(nil).recent # SELECT "posts".* FROM "posts" ORDER BY posts.updated_at DESC Post.by_status('').recent # SELECT "posts".* FROM "posts" ORDER BY posts.updated_at DESC Awesome. Now lets try to do the same with our beloved class method: class Post < ActiveRecord::Base def self.by_status(status) where(status: status) if status.present? 
end end Running this: Post.by_status('').recent NoMethodError: undefined method `recent' for nil:NilClass And :bomb:. The difference is that a scope will always return a relation, whereas our simple class method implementation will not. The class method should look like this instead: def self.by_status(status) if status.present? where(status: status) else all end end Notice that I'm returning all for the nil/blank case, which in Rails 4 returns a relation (it previously returned the Array of items from the database). In Rails 3.2.x, you should use scoped there instead. And there we go: Post.by_status('').recent # SELECT "posts".* FROM "posts" ORDER BY posts.updated_at DESC So the advice here is: never return nil from a class method that should work like a scope, otherwise you're breaking the chainability condition implied by scopes, that always return a relation. Scopes are extensible Lets get pagination as our next example and I'm going to use the kaminari gem as basis. The most important thing you need to do when paginating a collection is to tell which page you want to fetch: Post.page(2) After doing that you might want to say how many records per page you want: Post.page(2).per(15) And you may to know the total number of pages, or whether you are in the first or last page: posts = Post.page(2) posts.total_pages # => 2 posts.first_page? # => false posts.last_page? # => true This all makes sense when we call things in this order, but it doesn't make any sense to call these methods in a collection that is not paginated, does it? When you write scopes, you can add specific extensions that will only be available in your object if that scope is called. In case of kaminari, it only adds the page scope to your Active Record models, and relies on the scope extensions feature to add all other functionality when page is called. 
Conceptually, the code would look like this: scope :page, -> num { # some limit + offset logic here for pagination } do def per(num) # more logic here end def total_pages # some more here end def first_page? # and a bit more end def last_page? # and so on end end Scope extensions is a powerful and flexible technique to have in our toolchain. But of course, we can always go wild and get all that with class methods too: def self.page(num) scope = # some limit + offset logic here for pagination scope.extend PaginationExtensions scope end module PaginationExtensions def per(num) # more logic here end def total_pages # some more here end def first_page? # and a bit more end def last_page? # and so on end end It is a bit more verbose than using a scope, but it yields the same results. And the advice here is: pick what works better for you but make sure you know what the framework provides before reinventing the wheel. Wrapping up I personally tend to use scopes when the logic is very small, for simple where/order clauses, and class methods when it involves a bit more complexity, but whether it receives an argument or not doesn't really matter much to me. I also tend to rely more on scopes when doing extensions like showed here, since it's a feature that Active Record already gives us for free. I think it's important to clarify the main differences between scopes and class methods, so that you can pick the right tool for the job™, or the tool that makes you more comfortable. Whether you use one or another, I don't think it really matters, as long as you write them clear and consistently throughout your application. Do you have any thought about using scopes vs class methods? Make sure to leave a comment below telling us what you think, we'd love to hear. Nice writeup. I still use scopes as it’s just felt like the write thing, and these are great examples of why I believe I’m correct 🙂 Thanks for sharing. Nice post. Scopes should generally be avoided. 
You’ll be surprised to know that when you chain scopes dealing with the same column, like: `Post.published_before(date).published_after(date)` – the 2nd scope will completely override the first one, which is something that does not happen when using class methods. Yeah I’m aware of it, but I intentionally decided to left this bit out of the blog post because it’d generate more confusion than show a point, and it’s likely something that will change in Rails 4 – there are related issues opened about it, and I hope to be able to work on them if no one is able to do before me. So I’d not say scopes should be completely avoided, I’d rather say this is a bug and will hopefully get fixed :). Anyway, thanks for sharing this information with others here as well. Good article, thanks From what I understand there is no way to avoid this issue if you want to keep `default_scope` in. Only way is to remove the `default_scope`, which I’m all for doing, but not really sure if it will happen. Nice article. Thanks for sharing Nice article, I’ve had tackled this problem in the past. It’d be cool to mention how to return a relation from a class method as well, so you could keep it chainable instead of returning nil. Hey Chuck, thanks for your feedback. The last class method example in the chainable section adds a condition that returns a relation with “all” in case the given status is blank. In Rails 4, “all” returns a relation and not an array of records anymore, whereas in Rails 3 you should use “scoped” to receive a relation back. In short, this example should always return a relation making the method always chainable. Yeah, as you I think that default_scope is not going anywhere any time soon. But the problem of chaining two scopes that deal with the same condition is something I think we should be able to handle, lets see where it goes. Cheers. Considering that’s a logical consequence of scoping on the same field twice, I don’t think that’s an argument against scopes. 
Are you saying it should raise an error instead, and doesn’t in order to support default_scope overriding? > Considering that’s a logical consequence of scoping on the same field twice It’s not a logical consequence. I would expect them to work together the same way as the do when you normally chain Arel methods. All I’m saying is `default_scope` with all its quirks should be removed and then this changed. Can someone explain why scopes are working weird when you inherit from STI class with scopes. For ex: class User scope :active, where(…) class Admin < User Admin.active.to_sql will have extra condition of type: select * from users where type in ('Admin', 'User').. maybe I do not understand the point, when I write two scopes of type scope :created_after, -> date { where(“created_at > ? “, date )} scope :created_before, -> data { where(“created_at ‘2013-01-17’ ) AND (created_at < '2013-02-03' ) Gio Good post, thanks 🙂 Yeah, my example wasn’t too good. But the issue is still there, just it manifests itself a little bit differently. Here are the details: Oleg, I’ve never seem this before, but if you think it’s a Rails issue I’d ask you to open an issue giving as much information as possible, with more code examples showing your issue there. Thanks! Yup, I agree that we should make scopes work the same as chaining Arel methods, being combined in the same field instead of overriding the condition. And the same should happen for default_scope, it should combine using scopes or where methods, so that to override any condition you should be very specific about it.
http://blog.plataformatec.com.br/2013/02/active-record-scopes-vs-class-methods/comment-page-1/
Documentation The friendly Operating System for the Internet of Things
6LoWPAN definitions for Network interface API More...
6LoWPAN definitions for Network interface API
Definition in file 6lo.h.
#include <stdbool.h>
#include <stdint.h>
Go to the source code of this file.
Like the capability flags in the 6LoWPAN Capability Indication Option (6CIO), the flags in this group are less about hardware capabilities than about the implementation status within the network. For the flags in this group it is currently undefined how to exchange the capabilities between nodes, but they might be added to the 6CIO at a later point. Once the 6CIO is implemented in GNRC and the flag is supported by it, the corresponding flag in these local flags can be removed.
Selective Fragment Recovery enabled.
Definition at line 46 of file 6lo.h.
https://api.riot-os.org/6lo_8h.html
In my previous blog, I explained how modal can be used in functional components. My obsession with modal continues, so now I'll go over one of the ways you can use modal in class components!

First, start off with a basic React class component:

import React, { Component } from 'react'

class ModalInClassComponents extends Component {
  render() {
    return (
      <div>
      </div>
    )
  }
}

export default ModalInClassComponents;

Now, in your terminal, you want to install:

npm install react-responsive-modal

and import Modal and its stylesheet in your component:

import { Modal } from 'react-responsive-modal';
import 'react-responsive-modal/styles.css';

Create a state for the modal to stay closed initially:

state = { openModal: false }

Create a button with an onClick attribute. We will call a function when the button is clicked, which sets the openModal state to true.

<button onClick={this.onClickButton}>Click Me</button>

onClickButton = e => {
  e.preventDefault()
  this.setState({ openModal: true })
}

Now, we need to use the Modal component and add two attributes: open and onClose. open is set to this.state.openModal, so the modal opens when the state is true. onClose works the same way as onClick; however, in this case, we want to set the state back to false.

<Modal open={this.state.openModal} onClose={this.onCloseModal}>
  {/* Here you can add anything you want to reveal when the button is clicked! */}
  <h1>You Did it!</h1>
</Modal>

onCloseModal = () => {
  this.setState({ openModal: false })
}

And that's it! You should be able to view your modal now:

I love modal because it adds a bit of oomph to your app, and it's very simple and easy to use.
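The open/close behavior itself is independent of React. The framework-free sketch below (names are illustrative) models the same state transitions the class component drives with setState:

```javascript
// Minimal model of the modal's state machine: closed initially,
// opened by the button handler, closed by the modal's onClose.
function createModalState() {
  let openModal = false; // mirrors this.state.openModal
  return {
    isOpen: () => openModal,
    onClickButton: () => { openModal = true; },  // like setState({openModal: true})
    onCloseModal: () => { openModal = false; },  // like setState({openModal: false})
  };
}

const modal = createModalState();
console.log(modal.isOpen()); // false: modal starts closed
modal.onClickButton();
console.log(modal.isOpen()); // true: clicking the button opens it
modal.onCloseModal();
console.log(modal.isOpen()); // false: closing resets the state
```

In the React version, the extra ingredient is simply that each state change triggers a re-render, which is what makes the Modal appear and disappear.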
The full code looks like this:

import React, { Component } from 'react'
import { Modal } from 'react-responsive-modal';
import 'react-responsive-modal/styles.css';

class ModalInClassComponents extends Component {
  state = { openModal: false }

  onClickButton = e => {
    e.preventDefault()
    this.setState({ openModal: true })
  }

  onCloseModal = () => {
    this.setState({ openModal: false })
  }

  render() {
    return (
      <div>
        <button onClick={this.onClickButton}>Click Me</button>
        <Modal open={this.state.openModal} onClose={this.onCloseModal}>
          <h1>You Did it!</h1>
        </Modal>
      </div>
    )
  }
}

export default ModalInClassComponents;

Thank you for making it till the end!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/bhuma08/react-create-modal-using-class-components-5d1g
The QPalette class contains color groups for each widget state. More... #include <qpalette.h> List of all member functions. The QPalette class contains color groups for each widget state. A palette consists of three color groups: active, disabled, and inactive. All widgets contain a palette, and all widgets in Qt use their palette to draw themselves. This makes the user interface easily configurable and easier to keep consistent. If you create a new widget we strongly recommend that you use the colors in the palette rather than hard-coding specific colors. The color groups: Both active and inactive windows can contain disabled widgets. (Disabled widgets are often called inaccessible or grayed out.) In Motif style, active() and inactive() look the same. In Windows 2000 style and Macintosh Platinum style, the two styles look slightly different. There are setActive(), setInactive(), and setDisabled() functions to modify the palette. (Qt also supports a normal() group; this is an obsolete alias for active(), supported for backwards compatibility.) Colors and brushes can be set for particular roles in any of a palette's color groups with setColor() and setBrush(). You can copy a palette using the copy constructor and test to see if two palettes are identical using isCopyOf(). See also QApplication::setPalette(), QWidget::palette, QColorGroup, QColor, Widget Appearance and Style, Graphics Classes, Image Processing Classes, and Implicitly and Explicitly Shared Classes. Constructs a palette from the button color. The other colors are automatically calculated, based on this color. Background will be the button color as well. See also QColorGroup and QColorGroup::ColorRole. This constructor is fast (it uses copy-on-write). Returns the active color group of this palette. See also QColorGroup, setActive(), inactive(), and disabled(). Examples: themes/metal.cpp and themes/wood.cpp. See also color(), setBrush(), and QColorGroup::ColorRole. 
See also brush(), setColor(), and QColorGroup::ColorRole. Warning: This is slower than the copy constructor and assignment operator and offers no benefits. Returns the disabled color group of this palette. See also QColorGroup, setDisabled(), active(), and inactive(). Examples: themes/metal.cpp and themes/wood.cpp. Returns the inactive color group of this palette. See also QColorGroup, setInactive(), active(), and disabled(). See also operator=() and operator==(). Returns the active color group. Use active() instead. See also setActive() and active(). Returns TRUE (slowly) if this palette is different from p; otherwise returns FALSE (usually quickly). This is fast (it uses copy-on-write). See also copy(). Returns a number that uniquely identifies this QPalette object. The serial number is intended for caching. Its value may not be used for anything other than equality testing. Note that QPalette uses copy-on-write, and the serial number changes during the lazy copy operation (detach()), not during a shallow copy (copy constructor or assignment). See also QPixmap, QPixmapCache, and QCache. See also active(), setDisabled(), setInactive(), and QColorGroup. See also brush(), setColor(), and QColorGroup::ColorRole. Sets the brush in for color role r in all three color groups to b. See also brush(), setColor(), QColorGroup::ColorRole, active(), inactive(), and disabled(). See also setBrush(), color(), and QColorGroup::ColorRole. Example: themes/themes.cpp. Sets the brush color used for color role r to color c in all three color groups. See also color(), setBrush(), and QColorGroup::ColorRole. See also disabled(), setActive(), and setInactive(). See also active(), setDisabled(), setActive(), and QColorGroup. Sets the active color group to cg. Use setActive() instead. See also setActive() and active(). Writes the palette, p to the stream s and returns a reference to the stream. See also Format of the QDataStream operators. 
Reads a palette from the stream, s into the palette p, and returns a reference to the stream. See also Format of the QDataStream operators. This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.3/qpalette.html
Hi, On 4/9/05, Michael Niedermayer <michaelni at gmx.at> wrote: > Hi > > On Saturday 09 April 2005 16:05, Yartrebo wrote: > > On Sat, 2005-04-09 at 12:32 +0200, Michael Niedermayer wrote: > > > Hi > > > > > > On Saturday 09 April 2005 07:51, Yartrebo wrote: > > > > Sorry, I attached a similarly named but totally different patch. I've > > > > now attached the right patch. > > > > > > [...] > > > > > > > --- libavcodec/snow.c 5 Apr 2005 17:59:05 -0000 1.47 > > > > +++ libavcodec/snow.c 9 Apr 2005 05:03:19 -0000 > > > > > > [...] > > > > > > > +#include "i386/mmx.h" > > > > +#include "i386/snow_mmx_sse2.h" > > > > > > stuff in liavcodec/ should not include stuff from libavcodec/i386 or > > > other processor specific directories > > > > okay, if you like it that way. Could you tell me how to do the run-time > > code selection and how to place it in dsputil? > > just look at libavcodec/{,i386/}dsputil*.{c,h}, its very simple > theres a DSPContext with function pointers, which get set to the specific > implementation like plain c or mmx > the actual implementations should also be in their own files as dsputil*.c is > already quite big (snow_dsp.c and snow_dsp_mmx.c might be possible names) > > > > > > > > +#ifdef HAVE_MMX > > > > +#define LOG2_OBMC_MAX 8 > > > > +#else > > > > #define LOG2_OBMC_MAX 6 > > > > +#endif > > > > > > same as above, HAVE_MMX/HAVE_ALTIVEC should be avoided in files in > > > libavcodec/, there are exceptions where its ok, but here i dont see why > > > the prozessor specific code shouldnt be entirely separate, please also > > > keep in mind that mmx/sse availability is decided at runtime and not > > > compiletime, and that the optimized code should be called through dsputil > > > and why not simply set LOG2_OBMC_MAX to 8 for c too? > > > > LOG2_OBMC_MAX is kept to 6 for the C because you wanted the minimum bits > > possible for the general implementation. 
> > i just wanted to keep the constants fit into 6bits not necessary keep the > actual table unchanged, i have no problem with multiplying them with 4 or > 1024 or whatever as long as it doesnt change the bitstream / we can reverse > it if its slower on some cpu, without breaking compatibility > > > [...] > > > > + > > > [...] > -- > Michael > > "nothing is evil in the beginning. Even Sauron was not so." -- Elrond Yartrebo, did you make some progress on those patches lately? If so, could you maybe post the latest version of them so that they get a chance to be examined, fixed, and maybe committed? Guillaume -- Reading doesn't hurt, really! -- Dominik 'Rathann' Mierzejewski
http://ffmpeg.org/pipermail/ffmpeg-devel/2005-November/004363.html
Difference between API and ABI API: Application Program Interface This is the set of public types/variables/functions that you expose from your application/library. In C/C++ this is what you expose in the header files that you ship with the application. ABI: Application Binary Interface This is how the compiler builds an application. It defines things (but is not limited to): - How parameters are passed to functions (registers/stack). - Who cleans parameters from the stack (caller/callee). - Where the return value is placed for return. - How exceptions propagate. I am new to linux system programming and I came across API and ABI while reading Linux System Programming. Definition of API : An API defines the interfaces by which one piece of software communicates with another at the source level. Definition of ABI : Whereas an API defines a source interface, an ABI defines the low-level binary interface between two or more pieces of software on a particular architecture. It defines how an application interacts with itself, how an application interacts with the kernel, and how an application interacts with libraries. How can a program communicate at a source level ? What is a source level ? Is it related to source code in anyway? Or the source of the library gets included in the main program ? The only difference I know is API is mostly used by programmers and ABI is mostly used by compiler. Let me give a specific example how ABI and API differ in Java. An ABI incompatible change is if I change a method A#m() from taking a String as an argument to String... argument. This is not ABI compatible because you have to recompile code that is calling that, but it is API compatible as you can resolve it by recompiling without any code changes in the caller. Here is the example spelled out. 
I have my Java library with class A:

// Version 1.0.0
public class A {
    public void m(String string) {
        System.out.println(string);
    }
}

And I have a class that uses this library:

public class Main {
    public static void main(String[] args) {
        (new A()).m("string");
    }
}

Now, the library author compiled their class A, I compiled my class Main, and it is all working nicely. Imagine a new version of A comes:

// Version 2.0.0
public class A {
    public void m(String... string) {
        System.out.println(string[0]);
    }
}

If I just take the new compiled class A and drop it in together with the previously compiled class Main, I get an exception on the attempt to invoke the method:

Exception in thread "main" java.lang.NoSuchMethodError: A.m(Ljava/lang/String;)V
    at Main.main(Main.java:5)

If I recompile Main, this is fixed and all is working again.

This is my layman's explanation:
- API: think include files. They provide the programming interface.
- ABI: think kernel module. When you run it on some kernel, the two have to agree on how to communicate without include files, i.e. a low-level binary interface.
http://code.i-harness.com/en/q/39bec5
This section lists some best practices for creating a circuit that performs well on Google hardware devices. This is an area of active research, so users are encouraged to try multiple approaches to improve results.

This guide is split into three parts:
- Getting your circuit to run
- Making it run faster
- Lowering error

Getting a circuit to run on hardware

In order to run on hardware, the circuit must only use qubits and gates that the device supports. Using inactive qubits, non-adjacent qubits, or non-native gates will immediately cause a circuit to fail. Validating a circuit with a device, such as cirq_google.Sycamore.validate_circuit(circuit), will test a lot of these conditions. Calling the validate_circuit function will work with any device, including those retrieved directly from the API using the engine object, which can help identify any qubits used in the circuit that have been disabled on the actual device.

Using built-in optimizers as a first pass

Using built-in optimizers will allow you to compile to the correct gate set. As they are automated solutions, they will not always perform as well as a hand-crafted solution, but they provide a good starting point for creating a circuit that is likely to run successfully on hardware. Best practice is to inspect the circuit after optimization to make sure that it has compiled without unintended consequences.

import cirq
import cirq_google as cg

# Create your circuit here
my_circuit = cirq.Circuit()

# Convert the circuit onto a Google device.
# Specifying a device will verify that the circuit satisfies constraints of the device.
# The optimizer type (e.g. 'sqrt_iswap' or 'sycamore') specifies which gate set
# to convert into and which optimization routines are appropriate.
# This can include combining successive one-qubit gates and ejecting virtual Z gates.
sycamore_circuit = cg.optimized_for_sycamore(my_circuit, new_device=cg.Sycamore, optimizer_type='sqrt_iswap')

Running circuits faster

The following sections give tips and tricks that allow you to improve your repetition rate (how many repetitions per second the device will run). This will allow you to make the most out of limited time on the device by getting results faster. The shorter experiment time may also reduce error due to drift of qubits away from calibration.

There are costs to sending circuits over the network, to compiling each circuit into waveforms, to initializing the device, and to sending results back over the network. These tips will aid you in removing some of this overhead by combining your circuits into sweeps or batches.

Use sweeps when possible

Round-trip network time to and from the engine typically adds latency on the order of a second to the overall computation time. Reducing the number of trips and allowing the engine to properly batch circuits can improve the throughput of your calculations. One way to do this is to use parameter sweeps to send multiple variations of a circuit at once.

One example is to turn single-qubit gates on or off by using parameter sweeps. For instance, the following code illustrates how to combine measuring in the Z basis or the X basis in one circuit.

import cirq
import sympy

q = cirq.GridQubit(1, 1)
sampler = cirq.Simulator()

# STRATEGY #1: Have a separate circuit and sample call for each basis.
circuit_z = cirq.Circuit(
    cirq.measure(q, key='out'))
circuit_x = cirq.Circuit(
    cirq.H(q),
    cirq.measure(q, key='out'))
samples_z = sampler.sample(circuit_z, repetitions=5)
samples_x = sampler.sample(circuit_x, repetitions=5)

print(samples_z)
# prints
#    out
# 0    0
# 1    0
# 2    0
# 3    0
# 4    0

print(samples_x)
# prints something like:
#    out
# 0    0
# 1    1
# 2    1
# 3    0
# 4    0

# STRATEGY #2: Have a parameterized circuit.
circuit_sweep = cirq.Circuit(
    cirq.H(q)**sympy.Symbol('t'),
    cirq.measure(q, key='out'))

samples_sweep = sampler.sample(circuit_sweep,
                               repetitions=5,
                               params=[{'t': 0}, {'t': 1}])
print(samples_sweep)
# prints something like:
#    t  out
# 0  0    0
# 1  0    0
# 2  0    0
# 3  0    0
# 4  0    0
# 0  1    0
# 1  1    1
# 2  1    1
# 3  1    0
# 4  1    1

One word of caution: there is a limit to the total number of repetitions. Take some care that your parameter sweeps, especially products of sweeps, do not become so excessively large that they exceed this limit.

Use batches if sweeps are not possible

The engine has a method called run_batch() that can be used to send multiple circuits in a single request. This can be used to increase the efficiency of your program so that more repetitions are completed per second. The circuits that are grouped into the same batch must measure the same qubits and have the same number of repetitions for each circuit. Otherwise, the circuits will not be batched together on the device, and there will be no gain in efficiency.

Flatten sympy formulas into symbols

Symbols are extremely useful for constructing parameterized circuits (see above). However, only some sympy formulas can be serialized for network transport to the engine. Currently, sums and products of symbols, including linear combinations, are supported. See cirq_google.arg_func_langs for details. The sympy library is also infamous for being slow, so avoid using complicated formulas if you care about performance. Avoid using parameter resolvers that have formulas in them.

One way to eliminate formulas in your gates is to flatten your expressions. The following example shows how to take a gate with a formula and flatten it to a single symbol with the formula pre-computed for each value of the sweep:

import cirq
import sympy

# Suppose we have a gate with a complicated formula. (e.g. "2^t - 1")
# This formula cannot be serialized.
# It could potentially encounter sympy slowness.
gate_with_formula = cirq.XPowGate(exponent=2 ** sympy.Symbol('t') - 1)
sweep = cirq.Linspace('t', start=0, stop=1, length=5)

# Instead of sweeping the formula, we will pre-compute the values of the formula
# at every point and store it in a new symbol called '<2**t - 1>'
sweep_for_gate, flat_sweep = cirq.flatten_with_sweep(gate_with_formula, sweep)

print(repr(sweep_for_gate))
# prints:
# (cirq.X**sympy.Symbol('<2**t - 1>'))

# The sweep now contains the non-linear progression of the formula instead:
print(list(flat_sweep.param_tuples()))
# prints something like:
# [(('<2**t - 1>', 0.0),),
#  (('<2**t - 1>', 0.18920711500272103),),
#  (('<2**t - 1>', 0.41421356237309515),),
#  (('<2**t - 1>', 0.681792830507429),),
#  (('<2**t - 1>', 1.0),)]

Improving circuit fidelity

The following tips and tricks show how to modify your circuit to reduce error rates by following good circuit design principles that minimize the length of circuits.

Quantum Engine will execute a circuit as faithfully as possible. This means that moment structure will be preserved. That is, all gates in a moment are guaranteed to be executed before those in any later moment and after gates in previous moments. Many of these tips focus on having a good moment structure that avoids problematic missteps that can cause unwanted noise and error.

Short gate depth

In the current NISQ (noisy intermediate-scale quantum) era, gates and devices still have significant error. Both gate errors and T1 decay rate can cause long circuits to have noise that overwhelms any signal in the circuit. The recommended gate depths vary significantly with the structure of the circuit itself and will likely increase as the devices improve. Total circuit fidelity can be roughly estimated by multiplying the fidelity for all gates in the circuit. For example, using an error rate of 0.5% per gate, a circuit of depth 20 and width 20 could be estimated at 0.995^(20 * 20) = 0.135. Using separate error rates per gate (i.e. based on calibration metrics) or a more complicated noise model can result in more accurate error estimation.

Terminal Measurements

Make sure that measurements are kept in the same moment as the final moment in the circuit. Make sure that any circuit optimizers do not alter this by incorrectly pushing measurements forward. This behavior can be avoided by measuring all qubits with a single gate or by adding the measurement gate after all optimizers have run. Currently, only terminal measurements are supported by the hardware. If you absolutely need intermediate measurements for your application, reach out to your Google sponsor to see if they can help devise a proper circuit using intermediate measurements.

Keep qubits busy

Qubits that remain idle for long periods tend to dephase and decohere. Inserting a spin echo, such as a pair of involutions (e.g. two successive Pauli Y gates), onto qubits that have long idle periods will generally increase the performance of the circuit. Be aware that this should be done after calling cirq_google.optimized_for_sycamore, since this function will 'optimize' these operations out of the circuit.

Delay initialization of qubits

The |0⟩ state is more robust than the |1⟩ state. As a result, one should not initialize a qubit to |1⟩ at the beginning of the circuit until shortly before other gates are applied to it.

Align single-qubit and two-qubit layers

Devices are generally calibrated to circuits that alternate single-qubit gates with two-qubit gates in each layer. Staying close to this paradigm will often improve performance of circuits. This will also reduce the circuit's total duration, since the duration of a moment is its longest gate. Making sure that each layer contains similar gates of the same duration can be challenging, but it will likely have a measurable impact on the fidelity of your circuit.
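The rough fidelity estimate described under "Short gate depth" (multiplying the per-gate fidelity over all gates) takes only a couple of lines of plain Python; a sketch, using the same 0.5% error rate and 20x20 circuit as the example above:

```python
def estimate_fidelity(error_rate: float, depth: int, width: int) -> float:
    """Roughly estimate total circuit fidelity by multiplying the
    per-gate fidelity (1 - error_rate) over all depth * width gates."""
    return (1.0 - error_rate) ** (depth * width)

# 0.5% error per gate, depth 20, width 20 -> about 0.135, matching the text.
print(round(estimate_fidelity(0.005, 20, 20), 3))
```

This is only a crude upper-level approximation; per-gate calibration data or a proper noise model, as the text notes, gives better estimates.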
Devices generally operate in the Z basis, so that rotations around the Z axis become book-keeping measures rather than physical operations on the device. These virtual Z operations have zero duration and no cost, if they add no moments to your circuit. In order to guarantee that they do not add moments, you can make sure that virtual Z gates are aggregated into their own layer. Alternatively, you can use the EjectZ optimizer to propagate these Z gates forward through commuting operators. See the function cirq.stratified_circuit for an automated way to organize gates into moments with similar gates.

Qubit picking

On current NISQ devices, qubits cannot be considered identical. Different qubits can have vastly different performance and can vary greatly from day to day. It is important for experiments to have a dynamic method to pick well-performing qubits that maximize the fidelity of the experiment. There are several techniques that can assist with this.

- Analyze calibration metrics: performance of readout, single-qubit, and two-qubit gates is measured as a side effect of running the device's calibration procedure. These metrics can be used as a baseline to evaluate circuit performance or identify outliers to avoid. This data can be inspected programmatically by retrieving metrics from the API, or visually by applying a cirq.Heatmap to that data or by using the built-in heatmaps in the Cloud console page for the processor. Note that, since this data is only taken during calibration (at most daily), drift and other concerns may affect the values significantly, so these metrics should only be used as a first approximation. There is no substitute for actually running characterizations on the device.
- Loschmidt echo: Running a small circuit on a string of qubits and then applying the circuit's inverse can be used as a quick but effective way to judge qubit quality. See this tutorial for instructions.
- XEB: Cross-entropy benchmarking is another way to gauge qubit performance on a set of random circuits. See the tutorials on parallel XEB or isolated XEB for instructions.

Refitting gates

Virtual Z gates (or even single-qubit gates) can be added to adjust for errors in two-qubit gates. Two-qubit gates can have errors due to drift, coherent error, unintended cross-talk, or other sources. Refitting these gates, and adjusting the circuit for the observed unitary of the two-qubit gate compared to the ideal unitary, can substantially improve results. However, this approach can use a substantial amount of resources.

This technique involves two distinct steps. The first is characterization, which identifies the true behavior of the two-qubit gate. This typically involves running many varied circuits involving the two-qubit gate in a method (either periodic or random) to identify the parameters of the gate's behavior. Entangling gates used in Google's architecture fall into a general category of FSim gates, standing for fermionic simulation. The generalized version of this gate can be parameterized by 5 angles, or degrees of freedom. Characterization will attempt to identify the values of these five angles.

The second step is calibrating (or refitting) the gate. Of the five angles that comprise the generalized FSim gate, three can be corrected for by adding Z rotations before or after the gate. Since these gates are propagated forward automatically, they add no duration or error to the circuit and can essentially be added "for free". See the devices page for more information on virtual Z gates. Note that it is important to keep the single-qubit and two-qubit gates aligned (see above) while performing this procedure so that the circuit stays the same duration.

For more on calibration and detailed instructions on how to perform these procedures, see the following tutorials:
https://quantumai.google/cirq/google/best_practices
Same function name in multiple modules?

I would like to name a function draw in multiple modules. (My graph theory module would have a draw for drawing graphs, my hyperbolic geometry module would have a draw for drawing things in the hyperbolic plane, and so forth.) At times, I'd like to have a few modules loaded at the same time, but Julia tells me that the name is already taken. I thought, thanks to multiple dispatch, that as long as the argument types are different, this is possible. But I'm having trouble. Suggestions on how to achieve this please?

You need to extend the draw method in the same way as you extend Base.getindex, for example (prefix with the name of the module that defined the actual function that the rest of the modules extend). If all methods that happened to have the same name automatically got merged, that would be kinda chaos.

Two options: If you want them to be completely unrelated functions that happen to share the name draw, just use the qualified names:

import A, B
A.draw()
B.draw()

On the other hand, if they are supposed to be part of a family of related functions, so that you can have functions from module C that act on objects from A or B generically, e.g. a function foo(x) = (dosomething(x); draw(x)) that can act on an x from A or B, then the modules have to know about one another. In this case, as @kristoffer.carlsson says above, you need to extend a common draw function in A and B:

module A
using GenericDraw # first module where draw is defined
...
GenericDraw.draw(x::Atype) = ...
end

module B
using GenericDraw # first module where draw is defined
...
GenericDraw.draw(x::Btype) = ...
end

Then both A and B are defining different methods of the same draw function, dispatched by types defined in those modules.

Thanks, but I'm not understanding. Could I have a Master module that just defines:

function draw()
end

And then modules A, B, and so on with:

import Master: draw
function draw(x::Atype)
    ....
end

in module A, and likewise in B and C? Or do I have no choice but to use A.draw(...) and B.draw(...) and so on? I thought with multiple dispatch I could have lots and lots of functions named draw so long as their arguments were different types.

Very helpful. I think I've got it. THANK YOU!

Yes, you could do that. A generic function has some "higher level concept" that we extend with other types. For example, let's look at the docstring for push!:

help?> push!
search: push! pushfirst! push pushfirst pushdisplay

  push!(collection, items...) -> collection

  Insert one or more items at the end of collection.

If you extend Base.push! then you opt into the "contract" that is specified by the function docstring. By doing so we can write generic code that works for many types of collections. However, let's say you are writing a game or something where you have a method called push! which pushes another player. Then you should not extend Base.push! because this function has a completely different meaning. There is no way to write generic code with Base.push! and your game version of push!. So presumably, your draw function in the "Master" module has some higher level concept of drawing associated with it. Then you extend that function using Master.draw with other types that agree to that concept, and we can write generic code using that draw function.
https://discourse.julialang.org/t/same-function-name-in-multiple-modules/16881
Final Project - TIKTOK

Software
- Adobe Illustrator, for 2D modeling, armband design
- Fusion360, for 3D modelling, body and cover design
- Ultimaker Cura, for printing the body
- PrusaSlicer, for printing the cover
- Laser Cutter 5.3, the software to format the Illustrator file into laser cutter code

For prototypes:
- KiCad, open source electronics design tool suite
- Mods, control software for the Roland mill
- PartWorks 3D (software for translating the mold design to G-code for the Shopbot)
- Shopbot control software (for sending the G-code to the Shopbot)
- Arduino IDE (simple IDE to program especially Arduino-based microcontrollers)
- AVRDude (the AVR programming application used and integrated in the Arduino IDE)
- Android Studio, for writing the Android app

Hardware
- Prusa i3 MK3S MMU2S, for the cover 3D print
- Ultimaker 2+, for the body 3D print
- Laser cutter, for the leather arm band
- Protomat S62, mill for the PCB
- Multimeter, for checking traces
- Knife, for cutting wires
- Anti-static tweezers, for electrical components
- Soldering iron, for soldering components
- Solder tin, for SMD component soldering
- Various surface-mounted components (see BOM and PCB design files)
- A regulated, digitally controlled DC power supply, to test initial power usage
- MacBook Pro 15 inch, 2015 edition, for programming the board and the Android app
- ISP programmer, USBtiny, made in week 05
- A USB to TTL controller
  (also known as an FTDI controller)
- Motorola Moto G 2nd edition, for the Android application
- Micro USB cable, for uploading code to the Android application

For prototypes:
- Shopbot, for milling the mold
- Wax, for the first mold
- Smooth-Cast, for the final cast
- PCM, for the flexible mold cast
- Roland Modella MDX-20 mill, for milling prototypes
  - with the mill bit for traces: mini end mills, two flutes, 0.40 mm 826 (damen.cc)
  - with the mill bit for cutout: two flutes, 0.8 mm

License

(c) Joey van der Bie, Amsterdam University of Applied Sciences, 2019-06-18. This work may be reproduced, modified, distributed, performed, and displayed for any purpose, but must acknowledge "Joey van der Bie, Amsterdam University of Applied Sciences project TIKTOK". Copyright is retained and must be preserved. The work is provided as is; no warranty is provided, and users accept all liability.

Files
- TIKTOK microcontroller code (Arduino) with accelerometer, vibration and BLE
- TIKTOK Android application
- Illustrator arm band design
- Laser arm band instruction code
- Smart watch case and cover Fusion360
- Ultimaker file watch case
- Gcode Ultimaker watch case
- Cover stl
- Cover Gcode Prusa

Materials: prices are from the Fabacademy part list if provided, otherwise via link or direct purchase price.

A haptic smart band for people with a visual impairment. Personal Internet of Things devices are becoming more common (e.g. smart home, health, sports). The accessibility of these devices is often low; user groups such as the visually impaired (VI) are not supported or even considered when the hardware and software is designed and implemented. While Apple devices do support accessibility via VoiceOver, this is not always implemented in the apps. Even when implemented, it results in the VI user having to rely on the text-to-speech interface. The TTS/voice interface demands the full attention of the VI user, making it very difficult to interact with other people while interacting with the device.
This demand on voice interfaces can be greatly reduced by implementing haptic-based input and output. Unfortunately, Apple Watches do not allow for advanced haptic feedback. We created a smart wearable optimised for delivering haptic input and output that can be utilised by the VI. By starting with presenting notifications via vibrations and input via tap patterns, we believe we can create a small, affordable and flexible wearable that is easy to use. We now present the platform, which can recognise taps, vibrate, and send these events from and to the smartphone. It can be further extended and integrated for specific use cases. For example, with a specific vibration pattern the current time can be communicated via vibrations, or an indication can be given whether the lights in the home are on or off.

Requirements
- Comfortable wearable form factor
- Recognise tap gestures via accelerometer
- Vibrations as output and feedback
- Bluetooth connection to phone (Android application)
- LED indicators for debugging
- Headers for debugging
- Uses a (rechargeable) battery
- Extendable Android smartphone application to send vibrations and receive taps

Initial sketches of accessible smartwatch concept

Casing

From the start of the Fabacademy course I knew I wanted to create a watch body. In week 1: Concept I sketched the look and feel; in week 3: Computer Aided Design, when experimenting with Fusion360, I created the basic design. I further refined my design in week 10: Molding and Casting and cast my first casing. It was not perfect: I made many errors along the way, and I find the molding and casting process time-consuming, with many potential points of failure. To reduce the possibility of error, save time and try out something different, I wanted to also try 3D printing the design. Also, with 3D printing I could add even more design features that were not possible with the mold.
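One of the requirements above, recognising tap gestures via the accelerometer, in its simplest form boils down to thresholding the acceleration magnitude. The actual firmware is Arduino code (see the project files); the sketch below only illustrates the thresholding idea in Python, and the 1.5 g threshold and sample values are my assumptions, not measured from the device:

```python
def magnitude(ax: float, ay: float, az: float) -> float:
    """Length of the acceleration vector, in g."""
    return (ax * ax + ay * ay + az * az) ** 0.5

def is_tap(ax: float, ay: float, az: float, threshold: float = 1.5) -> bool:
    """A tap shows up as a short spike well above the ~1 g of gravity.
    The 1.5 g threshold is an assumption, not a tuned value."""
    return magnitude(ax, ay, az) > threshold

# At rest only gravity is measured (about 1 g on one axis): no tap.
print(is_tap(0.0, 0.0, 1.0))   # False
# A spike on one axis pushes the magnitude over the threshold: tap.
print(is_tap(0.3, 0.2, 2.1))   # True
```

A real implementation would also debounce (ignore samples shortly after a detected tap) so one physical tap does not register multiple times.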
3D print body with Ultimaker 2+

For my final body design I opened my molding design and changed the width parameter to fit my new board size. Fusion360 automatically updated the size of my bodies to the new parameters. I added a mount for the on/off switch. The datasheet provided the dimensions, which I copied to my parameters in Fusion360. I started adjusting my button/switch holder: I extracted the rectangle from my sketch, then wanted to make a hollow form out of my new cube. Then something happened: TRIAGE! I had reached the time to start my 3D print. It was Friday, around 14:00. If I really wanted to print a case that day, I had to finish my design in a few minutes. I had to drop my holder for the switch, and just print the design I created earlier. After this painful decision, I opened my manual on how to 3D print from week 6: 3D Scanning and Printing. I took the Ultimaker 2+, exported my design from Fusion to Cura, and copied my specifications from my previous description. With an infill of 20% the Ultimaker will take about 1.5 hours; not bad. I added build plate adhesion and supports: the adhesion to prevent the print from moving, and the supports for the small cutout I made at the bottom for the armband. I switched filament from black to white and performed a first test print of a small servo arm of our group machine, to make sure there was no more black in the nozzle and the device was properly calibrated. After the servo arm was finished I loaded my case design. After 1.5 hours it was finished. And the result looked nice, besides that one of the arm band mounts was 2 mm off! I checked my design: just before exporting I had accidentally moved the mount with my mouse! I fixed my mistake, re-exported to Cura and sent it to the Ultimaker. This time printing went successfully. One thing that still went wrong was that the support was fused to the small cutout for the armband.
I accepted this fail, since the other parts of the body were perfect, and you won't see this mistake when wearing the watch.

3D print cover with Prusa

After having a working body, I found an extra hour to design a cover for it. Although I tried to keep it simple, it became complex fast. First I started by extracting the circle from my watch sketch, and then adding about 0.1 mm to make sure my cover would be a little bit larger than the body. I noticed my electronics were already sticking out of the watch case, so I made my top 7 mm high to make sure all the electronics would fit. Next I made a shell of my cylinder, keeping the top solid. To make sure the whole cylinder would not slide over my watch case, I had to add some form of legs inside my cover to stop at the edge of the watch case. I decided to extract another cylinder from my watch circle in the sketch, but only making it 1 mm high and then making a shell out of it of about 1 mm width. So I ended up with a small ring that would fit exactly on top of my watch case. I combined the ring and cover at about a 1 mm height, giving me the stop at the edge of my watch inside my cover. For aesthetic reasons I extracted the TIKTOK markings . and , from the sketch and combined them with the cover by extracting them a fraction of a mm from the top of the cover. Lastly, I also extracted two small cylinders of 1 mm each, right above where the LEDs of my microcontroller board are. Maybe I can connect some form of light guidance to this in the future, for example with glass fiber. But that will be after the Fabacademy course due to time constraints. I exported the cover to an STL and imported it in a new piece of software, PrusaSlicer. Due to the Ultimaker not being available I had to switch to the Prusa i3 MK3S MMU2S. This is the improved Prusa MK3S, with a module that allows printing with different filaments at the same time! I am not using different filaments; I just want to quickly print this cover in one color.
But the new module does mean I have to use PrusaSlicer instead of Cura. It provides pretty much the same settings options as Cura, only with the addition of the extra filaments. Also it has a beginner mode, making it easy for me to use. I used the following settings:
- filament: Prusament PLA (red)
- printer: Prusa i3 MK3S MMU2S Single
- supports: supports on build plate only
- brim: yes
- infill: 15%

I cleaned the board, loaded the filament into the first hole of the new module (the Prusa automatically grabs the filament when you load it into the hole), and started the print. As with the Ultimaker, the first print failed; the brim was warping upwards. I cancelled the job and started a new one. Now everything went fine, and after an hour I had a big red cover, and trying to fit it on the watch case made a beautiful snapping sound!

Creating a leather arm band

After discussing with Micky, I really wanted to create my own leather arm band for my TIKTOK watch. I experimented with two small pieces of leather (thank you Micky) to create the TIKTOK arm band strap. In Adobe Illustrator I designed the armband. Looking at similar bands, I decided on my dimensions.

Dimensions:
- length: 420 mm
- width: 22 mm
- holes length: 120 mm
- holes diameter/width: 2.2 mm

I created a rectangle of 420 by 22 mm, and modified the top corner with the Direct Selection tool (white pointer) to give it a round finish. Next I drew some circles at one side for the holes for the pin of the arm band. And I created a small rectangle at the other side for the position of the pin. Lastly, I thought it would be nice to decorate the band with engravings of the TIKTOK logo, so I placed a few logos on it. Note that when you want to laser cut text, you should first make vectors out of the text. This also saves you hassle with transferring the design to another computer that does not have the font you use.
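Instead of placing each pin hole by hand in Illustrator, the hole pattern from the dimensions above can also be generated as a small SVG and imported. A sketch using the 22 mm band width, 2.2 mm hole diameter, and 120 mm hole run from the list above; the 10 mm hole spacing is my assumption, not taken from the actual design:

```python
def band_holes(run_mm: float = 120.0, spacing_mm: float = 10.0,
               diameter_mm: float = 2.2, band_width_mm: float = 22.0):
    """Return (cx, cy, r) triples for evenly spaced holes, centred on the band."""
    r = diameter_mm / 2.0
    cy = band_width_mm / 2.0
    n = int(run_mm // spacing_mm) + 1
    return [(i * spacing_mm, cy, r) for i in range(n)]

def holes_to_svg(holes):
    """Render the holes as SVG <circle> elements (units: mm)."""
    circles = "".join(
        f'<circle cx="{cx}" cy="{cy}" r="{r}"/>' for cx, cy, r in holes
    )
    return f'<svg xmlns="http://www.w3.org/2000/svg">{circles}</svg>'

holes = band_holes()
print(len(holes))   # 13 holes over the 120 mm run
```

The generated SVG can be opened in Illustrator and positioned onto the band outline; the circle positions would still need offsetting to the hole end of the strap.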
You can create a vector out of your text by right-clicking the text and selecting the option Create Outlines. Next to the band, I created two small test designs, as you can also see in the picture. I saved the Illustrator file as .ai and .dxf and copied them to the laser computer. I took the laser manual I made in week 4, started the laser software, and imported my design. I selected the test cut piece, gave each item a color, and assigned different settings to each color. Since I had not cut leather before, I did not know the correct laser speed and power combination. I had 5 circles, and for each one I selected a different cutting setting. For example I started with:
- speed: 500, power: 10
- speed: 400, power: 10
- speed: 300, power: 10
- speed: 200, power: 10
- speed: 100, power: 10

I worked my way up to a higher power, and ended up selecting speed 300 and power 30 for cutting. Next I tried engraving with the same systematic approach, using the circles as a start. After finding a nice engraved circle, I tried to engrave the TIKTOK logo on the test piece. I ended up selecting for the engraving:
- speed: 100, power: 20

Higher powers looked too burned for my taste, and higher speeds actually spoiled some engraving outside the desired area. Lastly, I tried the cutout settings around my engraving to cut out the test piece. I found that my previously selected power and speed combination did cut through the material, but left some thin strands of leather connected, giving a rough, uneven look when releasing the piece from the leather. After some trials with other speeds, I concluded I had to use:
- speed: 80, power: 50

I think this big difference between a larger square and a small circle has to do with the laser slowing down when reaching a corner, and needing a certain length to get up to speed. The circle was simply too small for the laser to actually reach speed 300, but with the bigger square I was able to reach this speed.
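The systematic speed/power search above can be scripted rather than typed out by hand; a small sketch that builds the grid of candidate settings, one per test shape (the value ranges mirror the ones I tried; nothing here talks to the laser itself):

```python
def settings_grid(speeds, powers):
    """All (speed, power) combinations to assign, one per test shape."""
    return [(s, p) for p in powers for s in speeds]

# First pass: vary speed at a fixed low power, as in the test cuts above.
first_pass = settings_grid(speeds=[500, 400, 300, 200, 100], powers=[10])
print(first_pass)
# [(500, 10), (400, 10), (300, 10), (200, 10), (100, 10)]
```

The same helper then covers the later passes, e.g. `settings_grid(speeds=[300], powers=[10, 20, 30])` when working up to a higher power at a fixed speed.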
After finding the correct settings, which took me 1.5 hours, I had to try the actual design. And… success! While I only needed one, I made 3 pieces to have some spares.

Time management. I find this task a really nice example of my planning and triage: I had to keep my design simple to finish in time, and I finished almost exactly at the desired time. The leather band is a nice addition, but should not take me too much time. Therefore I allowed myself 5 hours to make the band:
- 2 hours design
- 2 hours cutting
- 1 hour finishing

Also, I saved this activity for the final days, since, if I could not find the time, I would make a sticker on the vinyl cutter instead. Thursday I had to make my band on the laser cutter. From 13:00 the device was reserved, so I had to finish before that time. I decided to take the time from 9:30 till 11:00 to design the band and test cuts in Illustrator, and to cut different band versions from 11:00 till 13:00. I finished my cut at 13:15.

PCB board design

The board for my smart band builds on the knowledge I gained in the past weeks. In week 5 I learned how to mill a board. In week 7, week 11 and week 12, I learned how to design a board with input and output devices. Over these weeks I evolved my design towards my smart band board design. With each new board I added functionality I needed for my final project. In week 14 I further explored the capabilities of my board by adding a Bluetooth module over serial. For my final board design I want to create a round board with an onboard Bluetooth module, and expansion headers for an I2C accelerometer and serial communication. Further, I want it to have an on/off button, indicator LEDs, and the possibility of adding a battery module. I want to make the board as small as possible, and I want to try to use both sides of the board as much as possible.
To get an impression of how the board will look, I stacked all my modules on top of each other. The first sketch I made in week 1 still resembles these requirements. Now, understanding better what components I am able to use, I could make my new board.

EAGLE

Having decided I want to make a double-sided board, I also wanted to try out a new design tool. I installed Autodesk EAGLE and downloaded the FabAcademy components library. You can import the files by copying them to “$HOME/Documents/EAGLE/libraries”. Next, use the “File->New” menu to start a new project and create a new schematic. I started by downloading the Satshakit schematics. The Satshakit is an Arduino Uno compatible board design, based on the Atmel ATmega328P. Defining components, connecting components, linking wires: it all goes pretty much the same as in KiCad. The biggest difference is that in KiCad the options are on the right, in EAGLE on the left. For some advice on how to use EAGLE, I searched around on the FabAcademy website and got some useful tips from Jimena Galvesparedes. I modified the Satshakit by making it a 3.3V board, adding the RN4871 Bluetooth chip and the vibration motor circuit.

Bluetooth board RN4871

The Bluetooth board I will be using for my new board is the Microchip RN4871. It is a very tiny BLE 4.2 module, which allows me to further shrink my design. The datasheet can be found here. On the website of Martyn Currey we find a nice hands-on with the board, plus he explains how to update the firmware of the board. Although the board is part of the FabAcademy inventory, its schematic and footprint are not part of the design files. They can be found online for both KiCad and EAGLE. Next up is connecting the board. I connected the wires as specified in the datasheet.

Battery management

For connecting and charging the battery with my module I encountered some difficulties. I received a battery charging module from Henk, but it does not provide an output to connect to your board.
Also, I found that the charging module provided up to 4.7V, which is too low for my voltage converter to 3.3V. It really was a bummer that I could not use this board for my design, and I had to drop onboard battery management.

3.3V

During development, I discovered I had used the wrong 3.3V converter. The one I used only supports up to 100mA, while my board will use more due to the combination of the components and the vibration motor. I switched the LM3480IM3-3.3/NOPBCT-ND for the ZLDO1117G33DICT-ND, which supports up to 1A. Its datasheet can be found here. The biggest difference between the two components is the footprint: SOT223 instead of SOT23. Next, I noticed I had not included a capacitor from the raw voltage to GND, so I added that as well.

MOSFET wrongly connected

Despite my efforts in week 12, I still connected my MOSFET wrongly in my new board. This resulted in a short circuit: the device kept drawing 2.5V and 1A, instead of 5V and about 0.06A.

Milling the board

I milled my board on the Protomat S62, a PCB mill we have at the Amsterdam University of Applied Sciences Makerlab. Since I had not used the device before, I decided to first explore its capabilities by making a test cut. While doing this I learned how the device worked, and could directly explain to two of my students how it works. Honestly, most of the thinking was done for me: FabAcademy student and colleague Loes Bogers explored how to use the device and wrote a manual explaining all the steps. Thank you again Loes!

Burning the bootloader - Chip not responding

When trying to upload the bootloader, as I had specified in my week 9, I received an error.

avrdude: Device signature = 0x000000 (retrying)
Error while burning bootloader.
Reading | ################################################## | 100% 0.00s

This should be ATMEGA328P: 0x1E950F, not 0x000000. I analyzed my traces and solder points. Together with Henk I found 2 solder points that were connected to a copper area.
This should not result in a short circuit, but it is still unwanted. Next I found that I had my MOSFET orientation wrong in the schematic. Last, I found that I had my pins reversed: instead of having SCK, MISO and MOSI on the left of my headers, they were on the right! To prevent myself from making these errors again, I quickly made a layout of the pins on paper. I ended up using this piece of paper multiple times over the course of several days. After rechecking with avrdude, I could confirm that my chip was detected. I burned the bootloader via the Arduino IDE. After burning the bootloader, I tried to upload a sketch via the FTDI controller. This resulted in upload errors and avrdude telling me the device was out of sync. To test if the serial interface could be used at all, I uploaded my SerialTest sketch from week 12 via the SPI interface by selecting the Upload Using Programmer option in the Arduino IDE. Now, when testing the serial connection, everything worked, and I knew I had a working chip that I could program via SPI and debug over Serial.

Microcontroller code

Vibrate

In week 9 I explored how to use input devices such as buttons and light sensors; in week 12 I explored how to use output devices such as the vibration motor. I designed a board with a vibration motor circuit. It is based on the “Learn about Electronics” circuit, translated to my needs and to the components available in the FabAcademy inventory. While searching for the components, I stumbled on the FabAcademy website of Silvia Pallazi: she had used the exact scheme from the “Learn about Electronics” website and converted the components to the FabAcademy parts list. I could not have been luckier! After checking Silvia’s work I decided to use a MOSFET that could handle more amps. I did make a mistake in my schematic design: I did not realize I had a P-channel MOSFET and not an N-channel type, and was therefore missing a pull-up resistor; I also connected the wrong legs. In my new board I used the same circuit and somehow made the exact same mistakes!
I did make one important new addition to my design: instead of drilling holes to solder the vibration motor onto, I used headers for the vibration motor. This allows me to quickly switch between motors when I break one. These things tend to break fast.

Vibration patterns

I tried several different vibration patterns in a test sketch -> Code for vibration pattern tests. From these tests I determined that the optimal delay between patterns is 100ms when the pin is high at analogWrite(255) (maximum PWM). Longer than 100ms makes the vibration feel very strong; between 200 and 300ms feels like the duration of smartphone patterns. The lowest possible PWM strength is analogWrite(50), but I would recommend 60 or 70. When creating a fade-in or fade-out, steps should not be bigger than 10, and thus start at 50.

A nod pattern is:

analogWrite(VIBRATIONPIN, 255);
delay(100);
analogWrite(VIBRATIONPIN, 0);
delay(100);

A short nod is:

analogWrite(VIBRATIONPIN, 255);
delay(50);
analogWrite(VIBRATIONPIN, 0);
delay(100);

Detecting taps

In week 11: Input devices I created a prototype of my board with an ADXL343 accelerometer as input device. The ADXL343 is great because of its many built-in functions that you can activate via I2C. I designed the board in KiCad. The footprint for the ADXL was not available in the FabAcademy library. Therefore I thought about designing a footprint myself, but I found that a fellow FabAcademy student, Ilias Bartolini, had designed a footprint for the ADXL343 for KiCad. Further, my design contained an ATtiny and the whole board ran on 3.3V using a voltage converter. The accelerometer talks I2C, and I made the mistake in my board of not selecting the correct pins. I fixed it by bridging the correct pins. Also, I was not able to correctly solder the accelerometer, so in the end I never communicated with the accelerometer, and I think I broke it with the heat gun. Next, my whole board stopped responding, and I was not able to communicate with it over Serial.
For my final board I did not want to make the same mistake as in week 11, so I decided to play it safe and design the board to use an accelerometer breakout board with the MPU-6050 that I also used in week 11: Input devices. I wanted to use this board to detect taps. The MPU-6050 does not have dedicated algorithms for detecting taps, therefore I created a basic implementation on the ATmega, using a threshold to detect a peak in the raw signal.

struct {
  byte x: 1;
  int xIntensity;
  byte y: 1;
  int yIntensity;
  byte z: 1;
  int zIntensity;
  long last;
  long current;
} peak;

...

AcX = abs(AcX);
xNorm = (AcX * alpha) + (xNorm * (1 - alpha));
AcX = abs(AcX - xNorm);
xBuf[0] = xBuf[1];
xBuf[1] = xBuf[2];
xBuf[2] = AcX;
if (xBuf[1] > TAP_MIN_THRESHOLD && xBuf[0]*TAP_SENSITIVITY_LEVEL < xBuf[1] && xBuf[1] > xBuf[2]*TAP_SENSITIVITY_LEVEL) {
  peak.x = 1;
  peak.xIntensity = xBuf[1];
  peak.current = previousRead;
}

...

//handle peaks
if ((peak.x > 0 || peak.y > 0 || peak.z > 0) && peak.current - peak.last > 50) {
  //tap detected, send package over serial
  int intensity = (peak.xIntensity + peak.yIntensity + peak.zIntensity)/3/100;
  byte pattern[4];
  pattern[0] = intensity; //intensity
  pattern[1] = 0x0A; //duration
  if ((peak.current - peak.last) > 1000) {
    pattern[2] = 0x00; //max pause reached
  } else {
    pattern[2] = ((peak.current - peak.last) / 10); //pause
  }
  pattern[3] = 0xFF; //end
  bleSerial.write(pattern, 4);
  Serial.write(pattern, 4);
  peak.last = peak.current;
  vibrate(pattern);
}

//reset peak detection
peak.x = 0;
peak.xIntensity = 0;
peak.y = 0;
peak.yIntensity = 0;
peak.z = 0;
peak.zIntensity = 0;

The code loads a measurement of one axis, in this example the X axis. Next it makes the value absolute, making sure we only work with positive values. Then I normalise the value by subtracting a running average (the alpha calculation) of the previous values. Having a normalized X value, I store it in a buffer with the 2 previous measurements.
Then there is an if statement with peak detection above a threshold. I compare the 2nd measurement with the 1st and 3rd measurements and check if it is 8 times higher (TAP_SENSITIVITY_LEVEL). I determined this sensitivity level by tapping a lot of times on the device and next to the device. Funny note: without this sensitivity threshold, I would get many, many peaks when the washing machine, 5 meters away from the table, is running! So when the 2nd value is higher than the two other values, I count it as a peak, and store this in a struct. After repeating these steps for the Y and Z axes, I check the peak struct to see if a peak was found. When found, I calculate the average intensity by combining the intensity values of the 3 axes.

Serial communication

In week 9 and week 16 I explored serial communication with the ATtiny using the software serial library. In our group assignment I explored the Serial library and the ATmega328P. I designed a board for the Arduino Pro Mini to control up to 6 servos. To calibrate each servo I wrote a simple servo control sketch that allows you to move (up to 6) servo arms from 0 to 180 degrees using the Serial monitor. To move, for example, servo arm 1 (0x01) to 160 (0xA0) degrees, you send the following command over the Serial in hex:

01 A0

The range between 0 and 180 is 0x00 till 0xB4. Then I extended the protocol with an end byte, 0xFF. The 0xFF code indicates a message is finished, making it easy for the Arduino to process the received data. To move, for example, servo arm 1 (0x01) to open (0x01), you send the following command over the Serial in hex:

01 01 FF

To close it you send:

01 00 FF

Protocol

For my new board I wanted to incorporate this protocol, but allow for sending vibration information in small packages.
I created a package in a byte array containing my new package protocol:

- byte intensity
- byte duration, for now defaulting to 0x0A (10, which translates to 10x10 = 100 ms)
- byte pause, either a value representing less than 1000 ms, or 0
- byte end marker, 0xFF

I send this over the software serial to the BLE device, which sends it to the smartphone. For debugging I also send it over Serial. To notify the user, I also vibrate the pattern. Last, I reset the peak struct and start the loop over.

void serialEvent() {
  while (Serial.available()) {
    byte incommingByte = Serial.read();
    if (incommingByte == endByte) {
      if (counter != 0) {
        //order the last 3 bytes
        byte temp[3];
        temp[0] = incomming[counter];
        counter = (counter + 1) % messageLimit;
        temp[1] = incomming[counter];
        counter = (counter + 1) % messageLimit;
        temp[2] = incomming[counter];
        vibrate(temp);
        counter = 0;
      } else {
        vibrate(incomming);
      }
    } else {
      incomming[counter] = incommingByte;
      counter = (counter + 1) % messageLimit;
    }
  }
}

Testing the Bluetooth board

Next up was testing the Bluetooth board. My new board has special debug headers for the Bluetooth board, to which I connected my FTDI controller. The board uses 3.3V, so I used a 5V to 3.3V converter for the VCC and TX pins of my FTDI controller. I downloaded the “RN4870/71 Bluetooth® Low Energy Module User’s Guide” and looked up the specific serial commands I needed. From the manual, on page 12, I learned the device is by default a pipe that directly transfers received data:

The RN4870/71 operates in two modes: Data mode (default) and Command mode. When RN4870/71 is connected to another BLE device and is in Data mode, the RN4870/71 acts as a data pipe: any serial data sent into RN4870/71 UART is transferred to the connected peer device via Transparent UART Bluetooth service. When data is received from the peer device over the air via Transparent UART connection, this data outputs directly to UART.
On page 13, we see we can connect over serial using the following settings:

- Baud Rate: 115200
- Data Bits: 8
- Parity: None
- Stop Bits: 1
- Flow Control: Disabled

I copied these settings to CoolTerm and directly got a response when starting the device. In the manual, on page 15, we find that commands can be sent in ASCII and always end with a CR (Carriage Return). To start communicating with the chip over serial, you first need to activate “Command mode”. This can be done by sending:

$$$

When the device is in Command mode it sends the response:

CMD>

Next, with the SN command, I set the name to “TikTok”:

SN,TikTok

When asking for the name with the GN command, I received “TikTok” as the name. With the D command we get some default settings information of the device:

CMD> BTA=D88039FA6F73
Name=TikTok
Connected=no
Authen=2
Features=0000
Services=00

With V we get the firmware version on the device:

CMD> RN4871 V1.18.3 4/14/2016 (c)Microchip Technology Inc

On the website of Microchip, we find that the latest firmware is 1.30, so I am a bit behind. Still, we have enough basic functionality to continue without updating the firmware.

Android app

In week 14 I added a Bluetooth board to my microcontroller and was able to communicate with the microcontroller from my iPhone via a serial Bluetooth app. In week 16 I expanded this functionality by creating a custom app. It is based on the BLEArduino app, but I had to expand its functionality to be able to send and receive data with my new board, because in week 16 I used the HM-10 module and I now use the Microchip RN4871. Also, I had not yet refined the user interface.

Serial communication over Bluetooth

I tested the connection with the module by downloading the Microchip Bluetooth Smart Discovery app. With this app I could not connect to the device. I then tried to connect to the device via my own Android app created in week 16, and this worked! I directly got a connection and saw the connect and disconnect messages in the CoolTerm app.
However, I was not able to send and receive data from my app. To get my apps working, I read on page 59 of the manual how to get the Transparent UART functionality for BLE working. I had to type the following commands over serial:

$$$
+
SS,C0
R,1

Now I could see the GATT services, communicate, and properly connect with most apps. Only not with my Android app; this was because it was only looking for the GATT services of the HM-10 device, not the RN4871. I looked up the specific UUIDs on page 65 of the manual and added them to my Android app.

public static String RF4871_RX_TX = "49535343-FE7D-4AE5-8FA9-9FAFD205E455"; //the service
public static String RF4871_TX = "49535343-1E4D-4BD9-BA61-23C647249616"; //the characteristic
public static String RF4871_RX = "49535343-8841-43F4-A8D4-ECBE34729BB3"; //the characteristic

Fail with setting the UUID hex codes for the new GATT

At first my changes did not have any effect. Only after a few hours of tweaking and trying out different combinations, I found that somehow the hex codes in Android are case sensitive! This is clearly a bug; hex codes should either be all upper case or it should not matter. After setting the hex values to lower case, sending the data worked.

public static String RF4871_RX_TX = "49535343-FE7D-4AE5-8FA9-9FAFD205E455".toLowerCase();

Fail, not able to receive values

Next, I was not able to receive values from the BLE chip in my app. As I looked at the received services, and read about the service, I thought that maybe I had used the wrong approach in Android. I found a great, simple tutorial on All About Circuits and implemented it in my BLE app. But when asking the Android Bluetooth service if data was available, I kept receiving the message that this GATT was not available. When I looked at the logs, I noticed I sometimes asked for this data too fast, i.e. before the Bluetooth service was finished connecting and gathering the available services.
Since the Bluetooth service tells us when it is finished gathering BLE services, I used that listener to tell me when I could start gathering data.

...
} else if (BluetoothLeService.ACTION_GATT_SERVICES_DISCOVERED.equals(action)) {
    //enable communication buttons.
    enableCommunicationButtons();
}
...

public void enableCommunicationButtons() {
    if (mBluetoothLeService != null) {
        Log.d(TAG, "GATT Services received, you can start listening");
    }
}

Still, receiving the data gave no result, but at least I now had no errors. I went back to my old code for my HM-10 BLE device and looked at how I received data from that device. I had done it not by directly requesting available data, but by setting a notification. Both approaches should work according to the documentation in the Microchip RN4871 manual, but I thought that maybe it is an Android quirk that it does not work via the new approach. I modified the code to activate the notification, and I started receiving the values.

public void readCustomCharacteristic() {
    if (mBluetoothAdapter == null || mBluetoothGatt == null) {
        Log.w(TAG, "BluetoothAdapter not initialized");
        return;
    }
    /*check if the service is available on the device*/
    BluetoothGattService mCustomService = mBluetoothGatt.getService(UUID.fromString(SampleGattAttributes.RF4871_RX_TX));
    if (mCustomService == null) {
        Log.w(TAG, "Custom BLE Service not found");
        return;
    }
    /*get the read characteristic from the service*/
    BluetoothGattCharacteristic mReadCharacteristic = mCustomService.getCharacteristic(UUID.fromString(SampleGattAttributes.RF4871_TX));
    if (mBluetoothGatt.readCharacteristic(mReadCharacteristic) == false) {
        Log.w(TAG, "Failed to read characteristic, setting notification");
        setCharacteristicNotification(mReadCharacteristic, true);
    }
}

/**
 * Enables or disables notification on a given characteristic.
 *
 * @param characteristic Characteristic to act on.
 * @param enabled If true, enable notification. False otherwise.
 */
public void setCharacteristicNotification(BluetoothGattCharacteristic characteristic, boolean enabled) {
    if (mBluetoothAdapter == null || mBluetoothGatt == null) {
        Log.w(TAG, "BluetoothAdapter not initialized");
        return;
    }
    mBluetoothGatt.setCharacteristicNotification(characteristic, enabled);
    if (UUID.fromString(SampleGattAttributes.RF4871_TX).equals(characteristic.getUuid())) {
        // (the descriptor write was garbled in this page; reconstructed
        // from the CLIENT_CHARACTERISTIC_CONFIG note below)
        BluetoothGattDescriptor descriptor = characteristic.getDescriptor(
                UUID.fromString(SampleGattAttributes.CLIENT_CHARACTERISTIC_CONFIG));
        descriptor.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
        mBluetoothGatt.writeDescriptor(descriptor);
    }
}

Note that I kept the CLIENT_CHARACTERISTIC_CONFIG. This is a special descriptor telling the Bluetooth service that the BLE service is a custom service, not defined in the BLE documentation. Read more about this and GATT attributes on this O'Reilly page.

After receiving the data, I processed it according to my new protocol containing 4 bytes:

- intensity of the vibration (desired values between 10 and 100)
- duration of the vibration
- pause after the vibration
- end byte 0xFF

If the received message is a complete message, I search for the end byte (0xFF) and then gather the intensity, duration and pause data to pass it to the next class.

private void broadcastUpdate(final String action, final BluetoothGattCharacteristic characteristic) {
    final Intent intent = new Intent(action);
    final byte[] data = characteristic.getValue();
    if (data != null && data.length >= 4) {
        for (int i = 0; i < data.length; i++) {
            if (data[i] == (byte) 0xFF && i - 3 >= 0) {
                intent.putExtra(EXTRA_DATA, new byte[]{data[i-3], data[i-2], data[i-1]});
            }
        }
    }
    sendBroadcast(intent);
}

In the next class I process it into an object that makes more sense for the application and the programmer:

void handleReceivedDataFromBLE(byte[] data) {
    VibrationDataPoint vdp = new VibrationDataPoint(data);
    ...
}

I designed the VibrationDataPoint class to know about bytes and integers, making it easier to work with in Android.
public class VibrationDataPoint {

    private int intensity; //between 10 and 1000
    private int duration;  //between 10 and 1000ms
    private int pause;     //between 10 and 1000ms

    public VibrationDataPoint(int intensity, int duration, int pause) {
        this.intensity = intensity;
        this.duration = duration;
        this.pause = pause;
    }

    public VibrationDataPoint(byte[] dataPoint) {
        if (dataPoint.length == 3) {
            setIntensity(((int) dataPoint[0] & 0xFF) * 10);
            setDuration(((int) dataPoint[1] & 0xFF) * 10);
            setPause(((int) dataPoint[2] & 0xFF) * 10);
        } else {
            throw new DataPointInvalidError();
        }
    }

    ...

    public byte[] toByteArray() {
        return new byte[]{(byte) intensity, (byte) duration, (byte) pause};
    }

Writing code in Android Studio

Interface

My app still had the basic interface, so I modified it to match my new TIKTOK brand. I created a theme with MaterialPalette.com, with the main colors matching my hardware: red, black, white and grey.

<?xml version="1.0" encoding="utf-8"?>
<!-- Palette generated by Material Palette - materialpalette.com/red/grey -->
<resources>
    <color name="primary">#F44336</color>
    <color name="primary_dark">#D32F2F</color>
    <color name="primary_light">#FFCDD2</color>
    <color name="accent">#9E9E9E</color>
    <color name="primary_text">#212121</color>
    <color name="secondary_text">#757575</color>
    <color name="icons">#FFFFFF</color>
    <color name="divider">#BDBDBD</color>
</resources>

Next I opened the activity_control.xml file, my main screen layout, and placed slider components (SeekBar) for setting the TIK (vibration pattern). To let the components match my new color theme, I created a theme in the values folder, defined in styles.xml. Here I gave the app its new colors, and the seekbar and the buttons the correct colors and dimensions.

<resources>
    <!-- Base application theme. -->
    <style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
        <!-- Customize your theme here.
        -->
        <item name="colorPrimary">@color/primary</item>
        <item name="colorPrimaryDark">@color/primary_dark</item>
        <item name="colorAccent">@color/accent</item>
    </style>

    <style name="Seekbar" parent="Base.Widget.AppCompat.SeekBar">
        <item name="android:maxHeight">5dp</item>
        <item name="android:splitTrack">false</item>
        <item name="android:progressDrawable">@drawable/custom_seekbar</item>
        <item name="android:thumb">@drawable/custom_thumb</item>
    </style>

    <style name="Button" parent="Widget.AppCompat.Button">
        <item name="android:background">@color/primary</item>
        <item name="android:textAppearance">@style/textButton</item>
        <item name="android:minHeight">60dip</item>
        <item name="android:minWidth">88dip</item>
        <item name="android:focusable">true</item>
        <item name="android:clickable">true</item>
        <item name="android:gravity">center_vertical|center_horizontal</item>
    </style>

    <style name="textButton" parent="TextAppearance.AppCompat.Widget.Button">
        <item name="android:textColor">@color/icons</item>
    </style>
</resources>

Last, in the screen editor I assigned the new styles to the specific components. To be able to use the new SeekBar values for our TIK, I collect the values in the DeviceControlActivity class before I send the TIK to the microcontroller.

public void onClickWrite(View v) {
    if (mBluetoothLeService != null) {
        mBluetoothLeService.sendVibrationDataPoint(
            new VibrationDataPoint(
                ((SeekBar) findViewById(R.id.vibration_intensity)).getProgress() / 10,
                ((SeekBar) findViewById(R.id.vibration_duration)).getProgress() / 10,
                ((SeekBar) findViewById(R.id.vibration_pause)).getProgress() / 10));
    }
}

And when the app receives a TOK, it displays the result on the screen instead of only in the log files.
void handleReceivedDataFromBLE(byte[] data) {
    VibrationDataPoint vdp = new VibrationDataPoint(data);
    Log.d(TAG, "Received vibration data= " + vdp.toString());
    dataList.append("\nTOK: " + System.currentTimeMillis() + " " + vdp.toString());
}

Result

Then I was out of time. I would have liked to implement more features, such as an API interface for other apps and example vibration patterns such as the current time expressed in vibrations, and of course to design a new board with an onboard accelerometer and battery management. I think that in a next iteration I can make the board twice as small, by losing the headers and adding an onboard accelerometer. I then want to use the ADXL343 again instead of the MPU-6050, but I will not make the same mistake as in week 11. I will make a special component board like the one Microchip provides for my RN4871. This way I can experiment with correctly soldering the accelerometer, without risking destroying the traces of my board and having to create a new board.
I am very new to C++ programming, and so I keep facing the below error saying: Reference to overloaded function could not be resolved; did you mean to call it? Below is my code that is causing me all sorts of trouble:

#include <stdio.h>
#include <string>
using namespace std;

int main() {
    string myname;
    string myage;
    cout << "Enter the name and age: ";
    cin >> myname >> myage;
    cout << "Hello, " << myname << ", are you " << myage << " years old?\n";
    return 0;
}

I am currently using Xcode on my Mac OS X Mojave. I have also noticed that if I have only the current code, then it works fine, but when I try to have multiple files, all of the files fail to work. Can anyone explain to me why my code is failing and what the solution for it can be?

Solution:

In your code, stdio.h does not define std::cin and std::cout. That header only defines the C functions for input and output, like printf and scanf. So it is an I/O header, but it is not the one you are looking for. You simply need to include <iostream> to get std::cin and std::cout. If you make this simple change, your code will start giving you the desired output.
Empty graph using implicit_plot3d with contour option

Hello everybody, I wanted to make an isosurface plot from a 3D matrix, which contains random values from 0 to 1 on an equally spaced 3D grid. Therefore I generated random numbers and reshaped them into a 3D matrix. I interpolated the 3D matrix linearly with the RegularGridInterpolator from scipy. To make a 3D plot of it, I am using the implicit_plot3d function of Sage with a given contour value. I get no errors, but in the end the graph is empty, which should not be the case in my opinion. Here is my code:

from scipy.interpolate import griddata
import numpy as np
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
import scipy.interpolate

numbers = np.random.random_sample((1000,))  # generate random numbers
#print numbers
data = np.reshape(numbers, (10, 10, 10))  # reshape the numbers into a 3D matrix
#print data
xi = yi = zi = np.linspace(1, 10, 10)
interp = scipy.interpolate.RegularGridInterpolator((xi, yi, zi), data)  # interpolate the 3D matrix with a function
#print xi

var('x,y,z')
#test = implicit_plot3d(interp,(x,1,10),(y,1,10),(z,1,10),contour=0.5)
test = implicit_plot3d(interp == 0.5, (x, 1, 10), (y, 1, 10), (z, 1, 10), plot_points=60, color='seagreen')
test.show()  # plot the function at a certain value
#a=(5.,5.,2.)
#interp(a)

Any ideas on that? Thank you very much!
GPSID (the GPS abstraction APIs introduced in Windows Mobile 5) has a few known issues (aka bugs). This is rather complicated due to versioning issues and also because of a C# wrapper library for the APIs that works around the bug, which may or may not exist on certain platforms. The base issue is that when GPSID shipped to our OEMs in Windows Mobile 5, it returned longitude and latitude in a very bad format (raw from the NMEA stream and not in reasonable degrees). It was possible to compute a reasonable representation of lat/long from what GPSID returned, but it was not reasonable to have every ISV in the world do this. So before our OEMs shipped any devices (it takes a few months for them to get ready) we released a fix to GPSID so it returned reasonable values, and it was put on all actual PocketPC + SmartPhone devices. This should have been the end of the story.

Unfortunately there are other factors here. Another MS developer created a C# wrapper library for GPSID that just P/Invoked the GPSID APIs. He dealt with the bad values that GPSID was returning pre-fix and converted them to be reasonable for the app. Unfortunately, the C# APIs were not fixed to remove this workaround when GPSID was fixed. So currently they are doing a conversion of the correct lat/long values, which means they're returning bogus data now. Further complicating this is that while the GPSID fix made it to PocketPC + SmartPhone ROMs, it did not make it to the device emulator. So that means on the emulator the C# piece actually does work correctly, because it is still fixing the unconverted data, but using the C APIs will give invalid results.

Summary:

GPSID with C APIs:
- Emulator
  STATUS: Will not work (GPSID bug not fixed)
  WORKAROUND: Call GetDegreesFromAngular() (see below) on lat + long values.
- PocketPC/SmartPhone (real device)
  STATUS: Will work (GPSID bug was fixed)
  WORKAROUND: Not needed

GPSID with C# APIs:
- Emulator
  STATUS: Will work (GPSID bug not fixed, C# workaround for issue in place)
  WORKAROUND: Not needed
- PocketPC/SmartPhone (real device)
  STATUS: Will not work (C# API working around a bug that no longer exists)
  WORKAROUND: Remove the C# code (it was shipped as source) that does the unneeded conversion.

You can tell if the underlying GPSID is fixed simply by making sure that the lat/long values it returns are what they're supposed to be for whatever the GPS driver is telling it. As promised above, here is the conversion function you need ONLY on the emulator and when using C API lat/long values (note I haven't tested this, but it should work):

DOUBLE GetDegreesFromAngular(DOUBLE dbl)
PocketPC/SmartPhone (real device) STATUS: Will work (GPSID bug was fixed) WORKAROUND: Not needed GPSID with C# APIs: Emulator STATUS: Will work (GPSID bug not fixed, C# work around for issue in place) WORKAROUND: Not needed PocketPC/SmartPhone (real device) STATUS: Will not work (C# API working around a bug that no longer exists) WORKAROUND: Remove the C# code (it was shipped as source) that does the unneeded conversion. We're very likely to get the C# wrapper fixed in a future release. Fixing GPSID on the emulator is under investigation. You can tell if underlying GPSID is fixed simply by making sure that the lat/long values it returns are what they're supposed to be for whatever the GPS driver is telling it. As promised above, the conversion function you need ONLY on emulator and using C API lat/long (note I haven't tested this as I'm describing below but it should work) DOUBLE GetDegreesFromAngular(DOUBLE dbl) { dbl /= 100; int i = (int) dbl; DOUBLE frac = (dbl - (double)i) * (100.0 / 60.0); return ((double)i + frac);} I apologize for all the problems this caused. The fault here is totally mine, and not the C# developer. Thanks to Christopher E Piggott of Rochester Institute of Technology and Trapulo "of Italy" (sorry I don't know your real name) for bringing all the C# + emulator issues to my attention and testing out some theories I had. [Author: John Spaith]
Hi all, I want to control the leverage of each security and of the entire portfolio separately. For example, I trade 2 pairs (4 stocks). At each trade, the quantity changes depending on some factors (hedge ratio and so on). I want to set the maximum leverage of each pair (the combination of the 2 stocks in the pair) to 1.0 and of the entire portfolio to 2.0, separately. When I use the 'SetLeverage' method, it sets the maximum leverage for the security and the entire portfolio together at the same time, which is not what I want. For example, as below, if I set the leverage of two securities to 1.0 each, both the maximum leverage of those two securities and of the entire portfolio is 1.0. In other words, if I order to buy each security with target percent 1.0 at the same time, only one of them gets filled, because the max leverage of the portfolio is limited to 1.0. What I want here is to buy both securities with target percent 1.0 (so the total leverage becomes 2.0).

def Initialize(self):
    # Set the cash we'd like to use for our backtest
    # This is ignored in live trading
    self.SetCash(100000)

    # Start and end dates for the backtest.
    # These are ignored in live trading.
    self.SetStartDate(2017,1,1)
    self.SetEndDate(2017,10,1)

    # Add assets you'd like to see
    self.spy = self.AddEquity("SPY", Resolution.Daily).Symbol
    self.aapl = self.AddEquity("AAPL", Resolution.Daily).Symbol

    self.Securities["SPY"].SetLeverage(1.0)
    self.Securities["AAPL"].SetLeverage(1.0)

The quantity is supposed to change all the time. It's not like 'SetHoldings("SPY", 1.0)' but like 'SetHoldings("SPY", weight)'. And the maximum leverage for each security (or each pair) should be set to a pre-defined number (e.g. 0.8, 1.0 or 1.2). So below the maximum leverage of each security (or pair), we can use the quantity (weight) as we get it. However, if the leverage is over the maximum, we should cap the quantity (weight) of the security (or pair). Can anyone help me on this? Thanks :)
https://www.quantconnect.com/forum/discussion/2649/leverage-control/p1
CC-MAIN-2021-31
refinedweb
359
67.35
Hey guys, I am most definitely new here and new to Java. I have been trying to get help with my code skipping a specific step when I run the program. The first time through, everything executes perfectly — that is, until the loop goes into effect. It skips over allowing the user to input "Employee Name", which is extremely important, because this is the area where the user can input "stop" to exit. Anyone know a different method to use, or where a conflict could be occurring to cause this problem? Here's the code:

package payrollprogram2;

import java.util.Scanner;

public class PayrollProgram2 {

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.printf("Welcome to the Employee Payroll System!\n");

        String name;
        double rate;
        int hours;
        boolean g = true;

        while (g == true) {
            System.out.printf("Enter employee's name. Type 'stop' to exit.\n");
            name = input.nextLine();

            if (name.equals("stop")) {
                System.out.printf("Exiting. Good-bye!");
                g = false;
                break;
            } else {
                do {
                    System.out.printf("Enter the hourly rate.\n");
                    rate = input.nextDouble();
                    if (rate <= 0)
                        System.out.printf("Please enter a positive number (above zero).\n");
                } while (rate <= 0);

                do {
                    System.out.printf("Enter the amount of hours worked (Whole, positive numbers only).\n");
                    hours = input.nextInt();
                    if (hours <= 0)
                        System.out.printf("Please enter a positive number (above zero).\n");
                } while (hours <= 0);

                int overtime;
                if (hours <= 40) {
                    overtime = 0;
                } else {
                    overtime = hours - 40;
                }

                double payroll = hours * rate + overtime * rate / 2;

                System.out.printf("\nEmployee Name: %s\n", name);
                System.out.printf("Hours Worked: %d\n", hours);
                System.out.printf("Rate per Hour: $%.2f\n", rate);
                System.out.printf("Total Income: $%.2f\n", payroll);
            }
        }
    }
}
http://www.javaprogrammingforums.com/whats-wrong-my-code/15155-java-programming-loop-execution-problem.html
CC-MAIN-2014-41
refinedweb
283
54.69
Android Mobile Browser - Omega-XXXX.local - Jon Gordon I have a web server set up on my Omega2+ that acts as a user interface which is reachable via the omega-xxxx.local domain on Windows (with Bonjour installed)/Mac/iOS when connected to the Omega2+'s WiFi Access Point. As some of you may know, this does not seem to work for Android mobile browsers because, I assume from what I've read, Android doesn't use mDNS. It is still reachable by IP address, just not through the .local namespace. Has anyone found a workaround for this so that the Omega can be reached by domain name instead of IP on Android mobile devices through the browser? Will the AP IP always be 192.168.3.1? Could some kind of Omega2+ DNS trickery be used to force the Android browser to resolve the (m)DNS? - Andy Burgess @Jon-Gordon Hi Jon, Not an expert, but the AP IP will always be 192.168.3.1 unless you change it. Cheers Andy - Douglas Kryder.
http://community.onion.io/topic/3430/android-mobile-browser-omega-xxxx-local/?page=1
CC-MAIN-2019-35
refinedweb
175
82.34
> I Need A Way To Write A Text File Through Unity In C#, That Saves It To The Inside Of The Application On OS X (By Inside I Mean Package Contents). I Have Tried

using System;
using System.IO;
using UnityEngine;
using System.Collections;

public class WriteText : MonoBehaviour {
    void Start() {
        System.IO.File.WriteAllText("ScurgeOfTheShadows.app/Highscore.txt", "Hello There");
    }
}

Yet It Says It Is An Invalid Directory, Maybe Im Just Not Sure How To Save To The Txt File Inside Of An App, Or Maybe System.IO Is Not What I Should Be Using. All Help Is Appreciated!

Answer by Slobdell · Oct 06, 2013 at 03:36 PM

I don't think you can use the System.IO methods because the directory is different based on what platform the game is for. You should use a Text Asset, and you can read and write from that. You find it by grabbing the filename without the txt extension and using the text property of the asset. The file also needs to be in a folder named "Resources"; if you don't already have one, just make one.

TextAsset asset = (TextAsset)Resources.Load("Highscore");
String textFromFile = asset.text;
asset.text = "What you want to write to the file";

I Get Everything Other Than The Resources Folder. Do You Mean The Data (I Think It Was Called Data) Folder That Comes With A Windows Build?

You could use System.IO in combination with Application.persistentDataPath. TextAsset is easier, though. Just need to add a folder inside the assets folder called "Resources" and put files in there you want to access in the game. This way, when you call Resources.Load, Unity will know exactly where to look for your file. It will always know where that folder is, even if you don't, because it will be different from Windows to Mac to iOS to Android, etc.

Jamora is correct as well. The point I didn't mention is that the directory structure in your Unity project is not necessarily the same as it is when you build it to an executable application. So what works in the editor will not work when you build the game, because the files aren't in exactly the same place. I'm pretty sure you cannot WRITE TO a TextAsset. (or indeed any.
https://answers.unity.com/questions/549907/write-to-a-text-file-inside-of-the-application.html
CC-MAIN-2019-26
refinedweb
429
75.81
Android Studio: Running the first Hello World Application (Kotlin)

Zulfi Khan, Ranch Hand, Posts: 107
posted 10 months ago

Hi, I tried to run a hello world program using the Tutorial Kart tutorial. The link is: Android Programming using Kotlin Tutorial. It does not ask to write any code, nor does it ask for altering any built-in code. I ran it both by connecting my mobile phone and by using an emulator.

Error when connected to mobile phone: "Session 'app': Error Installing APK". The error log file shows:

12:30 PM Gradle build finished in 21s 332ms
12:33 PM Instant Run is not supported on devices with API levels 20 or lower. (Don't show again)
12:33 PM Executing tasks: [:app:assembleDebug]
12:34 PM Gradle build finished in 1m 33s 345ms
12:34 PM Session 'app': Error Installing APK

Error log file when running on an emulator (visual studio android 23 arm ph):

12:34 PM Session 'app': Error Installing APK
12:44 PM Executing tasks: [:app:assembleDebug]
12:44 PM Emulator: emulator: WARNING: UpdateCheck: Failure: Error
12:45 PM Emulator: Process finished with exit code -1073741819 (0xC0000005)
12:46 PM Gradle build finished in 2m 0s 438ms

Also another run:

12:57 PM Executing tasks: [:app:assembleDebug]
12:57 PM Emulator: emulator: WARNING: UpdateCheck: Failure: Error
12:57 PM Gradle build finished in 10s 856ms
12:58 PM ADB rejected shell command (getprop): closed
12:58 PM Emulator: Process finished with exit code -1073741819 (0xC0000005)

Files: activity_main.xml, main_activity.kt

package com.example.hp.kotlinandroiddemotk

import android.support.v7.app.AppCompatActivity
import android.os.Bundle

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
    }
}

Somebody please guide me. Zulfi.

Pete Letkeman, Bartender, Posts: 1868
posted 10 months ago

It looks from the supplied stacktrace that the issue you are experiencing is related to "Instant Run".
You do not need to have this feature enabled, and some devices do not support it. You stated that you ran this on both an emulator and a physical device; what versions of Android were you running? You may not be aware of this, but Google supplies free tutorials on Android development, which can be found here. Google has supplied not only the instructions on how to get started, but usually the code needed and a functional sample project. You may need to update the project to use newer versions of Gradle, but the process is the same for all of the projects that need it. Google has even provided Kotlin code and project resources.

"The strongest of all warriors are these two — Time and Patience." ― Leo Tolstoy, War and Peace

Pete Letkeman
posted 10 months ago

I just noticed this line:

Error log file while running using an emulator (visual studio android 23 arm ph)

Are you using Visual Studio when following along with the tutorial that you initially posted? If so, there very well could be an issue with Visual Studio, and you may get better/different results with Android Studio. If you are choosing to stay with Visual Studio, then which version and edition of Visual Studio are you using?

Zulfi Khan
posted 10 months ago

Hi Pete Letkeman, I am using Android Studio 3.3. I am getting emulator error 0xc0000005, which is discussed here: How to fix application error 0xc0000005. I am attaching a picture which will tell you the version. I am also attaching 2 pictures caught during runs of the application. I am not running Visual Studio along with Android Studio, but I chose the emulator "visual studio android 23 arm ph".

android-version_coderanch.jpg
Run1-screen.jpg
Run2-screen.jpg

Zulfi Khan
posted 10 months ago

Hi, Visual Studio's version is VS2017. Zulfi.

Pete Letkeman
posted 10 months ago

I'm using the same version of Android Studio, so I do not think that is the issue. Here are a few things that you could try:
- Create a simple "hello world" app using only Java code.
- Make sure that your environment variables are set up correctly as noted here.
- Do not use Visual Studio's Android images; use Android Studio's Android images.
- Update your Android SDKs, if there is an update to be had, that is.
- Try different Android versions like 5, 5.1, 6, etc. to see if you get the same result.
- Some people have experienced issues with anti-virus software, including Windows Defender, so try temporarily disabling it while running Android Studio.
- How about your system? Overall, is it running as it should? Have you recently run a spyware, adware, and virus check? This could be part of the problem.
- DANGER/WARNING: You could try the non-stable channels of Android Studio, but this may cause problems as well.

Zulfi Khan
posted 10 months ago

Hi, I was trying to execute the Android application at the following link: Install Android Studio and Run Hello World. After step 14, it shows a figure which does not match with mine. Is this a problem? What should I do? Kindly guide me. Zulfi.

mismatch-with-fig.jpg

Zulfi Khan
posted 10 months ago

Hi, Number 3 component is not matching. Zulfi.

given-fig.jpg

Pete Letkeman
posted 10 months ago

All is fine. In the example provided on the web, the person selected the MainActivity class in Android Studio. In the picture provided by you, you have not. Have you tried the HelloWorld app walk-through provided by Google, which can be found here? If you are looking for a learning resource, you may be interested to know that Google provides a number of tutorials for Android developers for free, along with the required source code, found here.
Were you able to get past the problem which you posted previously regarding Kotlin?
https://coderanch.com/t/695632/android-studio/ide/Running-World-Application-Kotlin
CC-MAIN-2019-22
refinedweb
1,130
62.38
Perfect. This works.

import random
import matplotlib.pyplot as plt

N = 10
x_vals = range(N)
bit_vals = [-1, 1]
my_bits = []
for x in xrange(N):
    my_bits.append(bit_vals[random.randint(0, 1)])
print my_bits

fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax.set_ylim(ymin=-2, ymax=2)
ax.step(x_vals, my_bits, color='g')
ax.grid()
ax.set_yticks((-1, 1))
plt.show()

··· ----- Original Message -----
From: sebastian
Sent: 08/02/11 11:51 AM
To: matplotlib-users@lists.sourceforge.net
Subject: Re: [Matplotlib-users] How to create a square wave plot

Hi,

On Tue, 02 Aug 2011 08:43:47 +0000, Freedom Fighter wrote:
> Hi,
>
> what's the easiest method of creating a square wave plot?
> Let's say I have a data stream of bits that have values of "1" or "-1".
> The plot function wants to draw a diagonal line between those points
> but I need to have a horizontal line. So to get a "square wave" I must
> insert additional points to the series when the values change.
> I wonder if there is an easy way of doing this?

You can use the "step" function, which does exactly this and has arguments for setting where the step is made, etc.

Regards,
Sebastian
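For completeness, the manual alternative the original question describes — inserting duplicate points so each value is held as a horizontal segment — can be sketched without any plotting library at all (the helper name below is made up):

```python
import random

def expand_to_steps(xs, ys):
    """Duplicate points so a plain line plot draws horizontal segments:
    each value ys[i] is held from xs[i] until xs[i+1]."""
    sx, sy = [], []
    for i, (x, y) in enumerate(zip(xs, ys)):
        sx.append(x)
        sy.append(y)
        if i + 1 < len(xs):
            sx.append(xs[i + 1])  # hold y until the next sample point
            sy.append(y)
    return sx, sy

# Example: random +/-1 bit stream, as in the thread
bits = [random.choice([-1, 1]) for _ in range(10)]
step_x, step_y = expand_to_steps(list(range(10)), bits)
```

Passing `step_x, step_y` to an ordinary `plot` call then produces the same square wave that `ax.step` draws directly.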
https://discourse.matplotlib.org/t/how-to-create-a-square-wave-plot/15735
CC-MAIN-2021-43
refinedweb
257
70.19
hello, I'm trying to create a program to check the user's password (numbers only). If it's correct, then go to the next step; if not, just print "Invalid password". The password should be stored in another file, which is "password.txt". This is my code, but I couldn't make the program check the password from password.txt:

import pickle

f = open("pin.txt", "w")
pickle.dump(1234, f)
pickle.dump([1,2,3,4], f)
f.close()

def passw():
    f = open("password.txt", "r")
    x = pickle.load(f)
    pin = raw_input("Please insert your password number: ")
    while True:
        if pin >= 1000:
            print "password accepted"
        if pin <= 9999:
            print "password accepted"
            print
        if pin == x:
            print "Valid password"
            print
        if pin != x:
            print "Invalid password"
            print
        else:
            print "Invalid password"

passw()

thank you in advance
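One likely pitfall in the code above is that raw_input returns a string, which is then compared against an integer, and that the pickle file names don't match ("pin.txt" vs "password.txt"). A minimal corrected sketch, in modern Python 3 syntax and with made-up helper names, storing the PIN as plain text instead of a pickle:

```python
def save_pin(path, pin):
    # store the PIN as plain text rather than a pickle
    with open(path, "w") as f:
        f.write(str(pin))

def load_pin(path):
    with open(path) as f:
        return int(f.read().strip())

def check_pin(entered, stored_pin):
    # input() returns a string, so validate it is a 4-digit number
    # before converting and comparing numerically
    return entered.isdigit() and len(entered) == 4 and int(entered) == stored_pin
```

The prompt loop would then be something like `check_pin(input("Please insert your password number: "), load_pin("password.txt"))`, printing "Valid password" or "Invalid password" based on the result.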
https://www.daniweb.com/programming/software-development/threads/389659/create-program-to-check-for-the-user-password
CC-MAIN-2017-09
refinedweb
137
77.74
To try the WSDL refactoring functionality, feel free to download a ReadyAPI trial from our website.

When the WSDL has been updated, right-click the WSDL interface node and select "Refactor Definition", which will open the Refactor Definition wizard.

Specify the Definition URL for the updated WSDL and whether you want to create new requests and backup copies for all modified requests. Selecting Next will load and analyze the new WSDL and start the refactoring process by prompting for how to map old operations to new ones (if required).

Operations that are not automatically matched (based on their names) are displayed in red; associate these with their new counterparts (if available) by either dragging them onto the new operation or by selecting both and pressing the "Connect" button. If two operations have been incorrectly associated, select them both and press the "Disconnect" button instead. Press Next to move on to the message refactoring.

This is the main refactoring step; the window is laid out as follows:

Changing a namespace, reordering elements, and changing an element to an attribute (or vice versa) are refactored automatically and can be reviewed in the top section. Renaming elements and attributes can be mapped manually in the top section. More complex structural changes are currently not handled, so each affected request and response needs to be edited manually in the bottom section.

Red nodes in the left-hand tree indicate messages that could not be refactored automatically, either because the old schema could not be mapped directly to the new schema, or because old messages did not match their old schema. These messages need to be edited manually. Red nodes in the Old Schema tree indicate elements or attributes in the old schema that are missing in the new schema; please review the requests or MockResponses that use those values.
Blue nodes in the New Schema tree indicate elements or attributes in the new schema that did not exist in the old schema — a help for finding schema changes. The Filter checkbox can be selected to hide the nodes that were not changed and focus on the changes. When all operations and their messages have been resolved, move on to the next step with the Next button.

This step prompts you to review all XPath expressions in your project to see if they have been updated correctly. The tree to the left shows all items in the project containing XPath expressions (assertions, property transfers, etc.). Selecting an XPath in the tree will show its old and new values in the two panels to the right. The New XPath expression can be modified manually if desired.

If an XPath could not be refactored automatically (because it was invalid or because the schema changes were too complex), it will be shown in red in the left-hand tree, and the New XPath text will be empty; please edit the New XPath manually. If this field is left empty, the XPath will be left unmodified.

Select Finish when ready, and SoapUI will process your project as configured and update all messages and XPath expressions accordingly.
https://www.soapui.org/docs/soap-and-wsdl/wsdl-refactoring/
CC-MAIN-2020-34
refinedweb
522
57
/*
 * Cobertura -
 *
 * Copyright (C) 2005 Jeremy Thomerson
 *
 * Cobertura is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published
 * by the Free Software Foundation; either version 2 of the License,
 * or (at your option) any later version.
 *
 * Cobertura is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with Cobertura; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
 * USA
 */

package someotherpackage;

/**
 * This class is only for testing that stuff from multiple source
 * directories works properly.
 *
 * @author Jeremy Thomerson
 */
public class SomeOtherClass {

    private int counter;

    public SomeOtherClass() {
        // no-op
    }

    public int incrementCounter() {
        return ++counter;
    }

    public int decrementCounter() {
        return --counter;
    }

    public int getCounter() {
        return counter;
    }

    /**
     * Don't call this method. It is one that is supposed to not be called
     * by the unit tests so that we can verify that everything is being
     * recorded properly.
     */
    public void neverCallThisMethod() {
        throw new UnsupportedOperationException("You weren't supposed to call this method.");
    }
}
http://kickjava.com/src/someotherpackage/SomeOtherClass.java.htm
CC-MAIN-2013-20
refinedweb
264
54.66
/* Assume that a and b point to arrays with at least length elements */
/* Assume that none of the intermediate values overflows a double. */
double dotProduct(double *a, double *b, int length) {
    double runningSum = 0;
    for (int index = 0; index < length; index++)
        runningSum += a[index] * b[index];
    return runningSum;
}

(in LispLanguage)

;; assume that a and b are the same length.
;; assume that corresponding elements of a and b can be multiplied together.
;;
(defun dot-product (a b)
  (reduce '+ (mapcar '* a b)))

(The original code here used "apply" instead of "reduce", but that may hit a limit on the number of arguments.) The name of the lisp function "mapcar" is not very descriptive, but it means something like "perform elementwise". However, you'd end up consing the extra lists (returned from MAPCAR), so in a production system most lispers would use a straightforward, non-functional loop and collect:

;; assume that a and b are the same length.
;; assume that corresponding elements of a and b can be multiplied together.
;;
(defun dot-product (a b)
  (loop for x across a
        for y across b
        summing (* x y)))

Also, the above would probably have declarations on the types of vectors being multiplied. And you'd like to check that they're of congruent dimensions (not done in the above code). Or, if they already included the SERIES package, they might use that; it provides macros which transform applicative-style coding into loops transparently. -- SmugLispWeenie

CommonLisp

Bear with me here - I'm a relative Lisp newbie, I've never tried this mechanism before, and I realise it's a contrived example. But I just recognised a pattern: we have two versions of a program, one being a little prettier but less efficient than the other, but both obviously equivalent (datatype aside). Might it not be fun to write a CompilerMacro? to teach the Lisp compiler how to translate the pretty version into the efficient one automatically?
Here's one I hacked up for learning purposes - benchmarking indicates that it does indeed make the pretty version translate into the efficient code. Great fun!

(define-compiler-macro apply (&whole exp operator &rest args)
  "Optimize (APPLY '+ (MAPCAR '* ....)) into an equivalent loop."
  (if (and (equal operator ''+)
           (equal (caar args) 'mapcar)
           (equal (cadar args) ''*))
      (make-sum-of-products (nthcdr 2 (car args)))
      exp))

(defun make-sum-of-products (list-exps)
  "Return code to sum the element-wise products of LIST-EXPS."
  (let ((vars '()))
    (flet ((for-clause (list-exp)
             (push (gensym) vars)
             `(for ,(car vars) in ,list-exp)))
      `(loop ,@(mapcan #'for-clause list-exps)
             summing (* ,@vars)))))

It seems straight-forward to generalise this to handle similar operations. I bet there're a million caveats, but, is Lisp fun or what? :-)

Also, even in a production system, the EightyTwentyRule applies to performance. Some code has to be fast, but most doesn't. If you know which is which, you get to write cute code most of the time.

No. Changing APPLY would be a spectacularly bad thing to do. But, as I said, if you want code which transforms an applicative style of coding into an iterative one (transparently), check out the SERIES package, by Waters.

I suspected "advising" APPLY would be somehow bad. But why? -- LukeGorrie (who's reading about SERIES..)

Function dotProduct(ByVal a As IEnumerable(Of Double), ByVal b As IEnumerable(Of Double))
    Return a.Zip(Function(a, b) a * b).Sum()
End Function

def test():
    assert sum([3,4,5]) == 12
    assert ElementWiseMultiplication([3,2], [4,5]) == [12,10]
    assert DotProduct([1,2], [3,-4]) == -5

def ElementWiseMultiplication(list1, list2):
    return [list1[i] * list2[i] for i in range(len(list1))]

def DotProduct(list1, list2):
    return sum(ElementWiseMultiplication(list1, list2))

It's a bit longer than the lisp solution, but it fits my brain a bit better. YMMV.
The Python implementation need not be longer:

def dot_product(a, b):
    return reduce(lambda sum, p: sum + p[0]*p[1], zip(a,b), 0)

This is basically the same as the short LISP implementation above. Perhaps Python really almost is LISP with InfixNotation: SteveHowell's version can be reduced to the even shorter:

def dot_product(a, b):
    return sum([a[i]*b[i] for i in range(len(a))])

However the most Pythonic way is clean, lazy, and works on any pair of iterables (if you don't need LazyEvaluation, use zip rather than izip):

from itertools import izip
def dot_product(a, b):
    return sum(x*y for (x,y) in izip(a,b))

Or a more FP style:

from itertools import imap
from operator import mul
def dot_product(a, b):
    return sum(imap(mul, a, b))

Now it looks even more like Lisp.

dotProductOf: a and: b
    ^(a with: b collect: [:eachA :eachB | eachA * eachB])
        inject: 0 into: [:sum :each | sum + each]

or, as an extension to Float or Number:

dotProductOf: aNum
    ^(self with: aNum collect: [:a :b | a * b])
        inject: 0 into: [:sum :each | sum + each]

allowing

aNum dotProductOf: anotherNum

map2 f = (map (uncurry f) .) . zip
dotProduct = (sum .) . map2 (*)

or, without trying to look too smart and incomprehensible:

dotProduct :: Num a => [a] -> [a] -> a
dotProduct xs ys = sum (zipWith (*) xs ys)

or maybe

dotProduct = (sum .) . zipWith (*)

which is about as terse as possible, although the partially applied (.) goes as incomprehensible or as idiomatic, depending on experience. ;-)

or

dot [] [] = 0
dot (x:xs) (y:ys) = x*y + dot xs ys

which is readable.

$sum += $a[$_] * $b[$_] for 0 .. $#a;

#include <vector>
#include <algorithm>

template <typename T>
T DotProduct(std::vector<T> const &v1, std::vector<T> const &v2) {
    T result = 0;
    for (unsigned i = 0; i < std::min(v1.size(), v2.size()); ++i)
        result += v1[i] * v2[i];
    return result;
}

This code should work for any type T for which the binary operator '*' is defined.
The for-loop could also be implemented using for_each<>(), but this would result in longer code, I think. In standard C++ spirit you'd probably use iterators to separate the dot product function from a specific container. Here's the actual implementation of dot-product from VC++ (without user-supplied predicates). It's called inner_product in the C++ standard. This version doesn't handle unequal-length vectors like the version above. Note the inconvenient parameter _V, necessary to drive the type system for the return type.

template<class _II1, class _II2, class _Ty>
inline _Ty inner_product(_II1 _F, _II1 _L, _II2 _X, _Ty _V) {
    for (; _F != _L; ++_F, ++_X)
        _V = _V + *_F * *_X;
    return (_V);
}

template <int... Args> struct Vector {};

template <typename T, typename U> struct DotProduct;

template <>
struct DotProduct<Vector<>, Vector<>> {
    static int const value = 0;
};

template <int A, int... As, int B, int... Bs>
struct DotProduct<Vector<A,As...>, Vector<B,Bs...>> {
    static int const value = (A*B) + DotProduct<Vector<As...>, Vector<Bs...>>::value;
};

#include <iostream>

int main() {
    std::cout << DotProduct<Vector<1, 2>, Vector<3, -4>>::value << std::endl;
}

static double DotProduct(IEnumerable<double> v1, IEnumerable<double> v2) {
    return v1.Zip(v2, (a,b) => a * b).Sum();
}

dot_product([A|As], [B|Bs]) -> (A*B) + dot_product(As, Bs);
dot_product([], []) -> 0.

Would it be correct for DotProduct that if the vectors are of different lengths you only use a prefix of the longer one (as opposed to e.g. triggering an error)? Or are the others like this just because it made the original Lisp version prettier? :-) -- LukeGorrie

No, it isn't correct. Inner products (of which the usual 'dot product' is one) are defined on inner-product spaces, which are vector spaces which have (unsurprisingly) an inner product. This may or may not be a Hilbert space (i.e., have a norm). In any case, the operation is defined on vectors in a vector space.
Vector spaces are defined on a scalar Field, so strictly speaking we should also verify that the scalars live in the same space. Depending how you want to work with the numeric tower in your Lisp, you could leave the job of constraining the scalars to the '*' operator, and just do:

(defun dot-product (v1 v2)
  (assert (= (length v1) (length v2)))
  (loop for x across v1
        for y across v2
        summing (* x y)))

Or you could strongly constrain v1, v2 to some space you are working with, e.g. (check-type v1 (simple-array double-float (3))). For example, an optimized version for a declarations-as-assertions implementation working in double-precision euclidean space might look like the following, venturing into the not-so-elegant world of productionish code that does these checks and works on a more concrete representation (simple arrays of length 3 holding doubles), which may (unfortunately) be needed for efficiency purposes in many of the sorts of things you actually use euclidean 3-vectors for. This example avoids the loop macro, to show a different approach.

(defun dot-product (v1 v2)
  (declare (type (simple-array double-float (3)) v1 v2)
           (optimize speed))
  (assert (= (length v1) (length v2)))
  (do ((n 0 (1+ n))
       (res 0.0d0))
      ((>= n (length v1)) res)
    (declare (type double-float res)
             (type fixnum n))
    (incf res (* (aref v1 n) (aref v2 n)))))

So this version looks much like it would in any imperative language.... The nice thing is that you can easily move from abstract & slow to concrete and fast as you develop in Lisp. Most importantly, the restriction to double-floats can be sensibly relaxed within the numeric tower. Also note that this sort of low-level performance can often be best handled by compiler-macros etc., as someone mentioned above. No need to dirty most of the code like this! The length and domain checks should be there in the beginning though.

I updated the Erlang version to signal an error if the lists aren't the same lengths. It's still defined for zero-length lists, is that right?
What I was fishing for here is a way to show that with pattern-matching it's easy to handle these sorts of constraints neatly. -- LukeGorrie (who would switch to ML (or Python!) rather than write such gross Lisp code as that ;-))

Actually, you caught me typing late at night :) From the above description, you should see that an inner product needs a scalar Field, so zero-length tuples are out. Actually, length-1 tuples don't make sense either (then you are just operating in the scalar Field itself). The problem with this sort of thing is that an idea like the 'dot-product' generalizes so simply algorithmically that it is easy/tempting to write general code (much like we have notation <a,b> which generalizes quite well). Mathematically, the operation only makes sense when applied to elements 'from the same place'. On paper, this means that you have to be careful you haven't written some nonsense if you are working with multiple inner product spaces. In the computer, something else should enforce this discipline. As long as you aren't after raw speed (think numerical simulations), you would stay with the elegant version. Python certainly isn't fast enough for that (without punting to C code, which you could do in other languages as well; I don't know about ML variants). There is nothing wrong with doing something like:

(defun dot-product (v1 v2)
  (assert (and (= (length v1) (length v2))
               (/= 0 1 (length v1))))
  (reduce #'+ (mapcar #'* v1 v2)))

if you don't mind the consing. On the other hand, if you do work with different spaces regularly, it may be cleanest to use CLOS to designate your inner product spaces...

On Python speed for dot products: One might be surprised. I have a Python information retrieval application which does many dot products regularly, and the dot products are hardly a bottleneck. As always, speed depends upon the application, and in this case, I get to have my elegance and use it too. :)

Well, one might be surprised, but I wouldn't.
In the applications I am thinking of above, I would expect something like a million dot-products per time slice. So we are looking at billions-to-trillions of individual dot products. How many are you talking about? The Python version above isn't nearly as elegant as the lisp version, and probably isn't faster (I don't know enough about Python's internals to be sure of this). Walking along lists in lisp is very fast. For numerical work, the problem is that you just can't afford any temporary storage --- each dot product (on a suitable domain) *must* boil down to a few assembly instructions (preferably unrolled, if you are dealing with short vectors).

I think we all agree here: it depends on the application. We just have different applications. It's no surprise that those of us whose applications don't require cycle-squeezing wince a bit at the heavily optimized code :-) and vice-versa.

Absolutely. One nice thing about lisp is the ability to get both, without ForeignFunctionInterface kludges etc.; I don't claim lisp is unique in that regard. Although to be fair, sometimes the only practical approach is to revert to a hand-tuned assembly library. That can't be anything but yucky.

dot_product([],[],0).
dot_product([A|As],[B|Bs],D) :-
    dot_product(As,Bs,D1),
    D is (A*B)+D1.

function DotProduct(a, b : array of double): double;
var
  i : integer;
begin
  Result := 0;
  for i := max(low(a), low(b)) to min(high(a), high(b)) do
    Result := result + a[i] * b[i];
end;

select sum(a * b) from t

If in different tables:

select sum(a * b) from t,u where t.index = u.index

: @+ ( a -- a+ a@ )  dup cell+ swap @ ;

: .* ( v1 v2 len -- p )
  0 >r
  begin ?dup while
    rot rot @+ rot @+ rot * r> + >r rot 1-
  repeat 2drop r> ;

: m.* ( v1 v2 len -- d )  \ double cell result
  0. 2>r
  begin ?dup while
    rot rot @+ rot @+ rot m* 2r> d+ 2>r rot 1-
  repeat 2drop 2r> ;

May I ask why you're using a while loop to simulate a for loop? Is it to make it even more rotten?

or recursively...
  : dot-prod ( v1 v2 len -- p )
    1- ?dup 0= if
      @ swap @ *
    else
      rot dup @ >r cell+ rot dup @ >r cell+ rot recurse 2r> * +
    then ;

or to define dot-products for particular vector lengths...

  : *step ( v1 v2 -- p v1+ v2+ )  over @ over @ *  rot cell+ rot cell+ ;

  : make-N.*: ( n "name" -- )
    1- >r :
    r@ 0 do postpone *step loop
    postpone @ postpone swap postpone @ postpone *
    r> 0 do postpone + loop
    postpone ; ;

  3 make-N.*: 3.* ( v1 v2 -- p )  \ where v1,v2 point to vectors of length 3

  see 3.*  \ gforth
  : 3.*
    *step *step @ swap @ * + + ;

  \ tests
  create v1 1 , 2 , 3 , 4 , 5 ,
  create v2 6 , 7 , 8 , 9 , 10 ,
  v1 v2 3 .* .        \ 44
  v1 v2 4 m.* . .     \ 0 80
  v1 v2 5 dot-prod .  \ 130
  v1 v2 3.* .         \ 44

There is a better way to do this if your Forth has A & B registers:

  : *+ ( A:v1 B:v2 ) ( n -- n! )  @B+ @A+ * + ;

  \ ?EXIT is equivalent to IF EXIT THEN
  : .* ( v1 v2 len -- p )  -ROT >B >A 0 SWAP FOR *+ NEXT ;

  : make-N.*: ( n "name" -- )
    >R : POSTPONE >B POSTPONE >A 0 POSTPONE LITERAL
    R> FOR POSTPONE *+ NEXT POSTPONE ; ;

  3 make-N.*: 3.* ( v1 v2 -- p )  \ where v1 and v2 are vectors of length 3

  \ Compiles this:
  : 3.* ( v1 v2 -- p )  >B >A 0 *+ *+ *+ ;

If your Forth doesn't have A & B registers then you can simulate them as user variables, and it shouldn't be much slower. This is cleaner and faster. -- Michael Morris

AplLanguage:  A +.x B
JayLanguage:  A +/ .* B
KayLanguage:  +/ A * B

Ruby:

  def dot_product l1, l2
    l1.zip(l2).inject(0) { |sum, els| sum + els[0]*els[1] }
  end

or

  def dot_product l1, l2
    l1.zip(l2).map { |a, b| a*b }.inject { |sum, el| sum + el }
  end

or

  def dot_product l1, l2
    sum = 0
    for a, b in l1.zip(l2)
      sum += a*b
    end
    sum
  end

But I'm not an expert, so there may be better ways -- GabrieleRenzi

Without resorting to intermediate arrays:

  def dot_product l1, l2
    sum = 0
    l1.zip(l2) { |a, b| sum += a*b }
    sum
  end

Gotta love ruby blocks!
-- Martin Jansson

JavaScript:

  function dotproduct(a, b) {
    var n = 0, lim = Math.min(a.length, b.length);
    for (var i = 0; i < lim; i++) n += a[i] * b[i];
    return n;
  }

  assert( dotproduct([1,2,3,4,5], [6,7,8,9,10]) == 130 )

AppleScript:

  on min(a, b)
    if a < b then return a
    return b
  end min

  on dot_product(arr1, arr2)
    set sum to 0
    repeat with i from 1 to min(number of items in arr1, number of items in arr2)
      set sum to sum + (item i of arr1) * (item i of arr2)
    end repeat
    return sum
  end dot_product

  set a1 to {1, 2, 3, 4, 5}
  set a2 to {6, 7, 8, 9, 10}
  set dp to dot_product(a1, a2) -- should be 130

Standard ML (an SML/NJ session):

  Standard ML of New Jersey v110.57 [built: Fri Feb 10 21:37:49 2006]
  - val dotRealLists = ListPair.foldlEq (fn (x, y, s) => s + x * y) 0.0 ;
  [autoloading]
  [library $SMLNJ-BASIS/basis.cm is stable]
  [autoloading done]
  val dotRealLists = fn : real list * real list -> real
  - dotRealLists ([1.0, 0.0, 0.5], [0.5, 1.0, 1.0]) ;
  val it = 1.0 : real

or shorter:

  val dotRealLists = ListPair.foldlEq Real.*+ 0.0

OCaml:

  let dotProductInts xs ys =
    List.fold_left (+) 0 (List.map2 ( * ) xs ys)

For floats:

  let dotProductFloats xs ys =
    List.fold_left (+.) 0.0 (List.map2 ( *. ) xs ys)

"List.map2" is the equivalent of Haskell's "zipWith". OCaml does not come with a standard "sum" function, so we had to do a fold explicitly. Extra spacing must be used when putting the multiplication operators inside parentheses for use in non-infix style, because they contain asterisks, and "(*" and "*)" start and end comments in OCaml, respectively. In OCaml, the arithmetic operators for the different numeric types are different, and there are no generic operators for all numeric types, probably for reasons of speed, so the function for each type has to be written separately. It can be modified to suit other numeric types trivially.

It can be written shorter this way:

  let dotProductInts = List.fold_left2 (fun s x y -> s + x * y) 0
  let dotProductFloats = List.fold_left2 (fun s x y -> s +. x *. y) 0.0

PHP:

  $dot_product = array_sum(array_map('bcmul', $array1, $array2));

This is kind of a hack, in that we are using "bcmul", the BC arbitrary-precision multiplication function, because we want a multiplication function, but we can't just tell it to use "*".

  $dot_product = array_sum(array_map(create_function('$a, $b', 'return $a * $b;'), $array1, $array2));

This creates a multiplication function on the fly, but is ugly and verbose.

PHP 5.3 has decent lambda functions; not quite as verbose as create_function, and a lot more useful.

  $dot_product = array_sum(array_map(function($a, $b) { return $a * $b; }, $array1, $array2));

Using an idiom that's similar to FP's zip:

  $dot_product = array_sum(array_map('array_product', array_map(null, $array1, $array2)));

In an object-oriented style:

  Vector v = (1,2,3)
  Vector u = (4,5,6)
  v.DotProduct(u)

or

  Vector v = (1,2,3)
  Vector u = (4,5,6)
  VectorOperation? o
  o.DotProduct(u,v)?

What if you needed to include many operations such as ScalarMultiplication?, CrossProduct etc? Methods shouldn't belong to classes at all! Then you don't have this sort of confusion.

Logo:

  to dotprod :a :b
    ; vector dot product
    op apply "sum (map "product :a :b)
  end

Scala:

  def dotProduct(as: Seq[Double], bs: Seq[Double]) = {
    require(as.size == bs.size)
    (0.0 /: (for ((a, b) <- as.elements zip bs.elements) yield a * b)) (_ + _)
  }

Scala 2.8 has zip and sum for all collection types, so:

  def dotProduct[T <% Double](as: Iterable[T], bs: Iterable[T]) = {
    require(as.size == bs.size)
    (for ((a, b) <- as zip bs) yield a * b) sum
  }
http://c2.com/cgi-bin/wiki?DotProductInManyProgrammingLanguages
QGraphicsSimpleTextItem Class

The QGraphicsSimpleTextItem class provides a simple text path item that you can add to a QGraphicsScene.

#include <QGraphicsSimpleTextItem>

Inherits: QAbstractGraphicsShapeItem.

This class was introduced in Qt 4.2.

Detailed Description

To set the item's text, you can either pass a QString to QGraphicsSimpleTextItem's constructor, or call setText() to change the text later. To set the text fill color, call setBrush().

The simple text item can have both a fill and an outline; setBrush() will set the text fill (i.e., text color), and setPen() sets the pen that will be used to draw the text outline. (The latter can be slow, especially for complex pens, and items with long text content.) If all you want is to draw a simple line of text, you should call setBrush() only, and leave the pen unset; QGraphicsSimpleTextItem's pen is by default Qt::NoPen.

QGraphicsSimpleTextItem uses the text's formatted size and the associated font to provide a reasonable implementation of boundingRect(), shape(), and contains(). You can set the font by calling setFont().

QGraphicsSimpleTextItem does not display rich text; instead, you can use QGraphicsTextItem, which provides full text control capabilities.

See also QGraphicsTextItem, QGraphicsPathItem, QGraphicsRectItem, QGraphicsEllipseItem, QGraphicsPixmapItem, QGraphicsPolygonItem, QGraphicsLineItem, and Graphics View Framework.

Member Function Documentation

QGraphicsSimpleTextItem(parent)

Constructs a QGraphicsSimpleTextItem. parent is passed to QGraphicsItem's constructor.

See also QGraphicsScene::addItem().

QGraphicsSimpleTextItem(text, parent)

Constructs a QGraphicsSimpleTextItem, using text as the default plain text. parent is passed to QGraphicsItem's constructor.

See also QGraphicsScene::addItem().

~QGraphicsSimpleTextItem()

Destroys the QGraphicsSimpleTextItem.

boundingRect()

Reimplemented from QGraphicsItem::boundingRect().

contains(point)

Reimplemented from QGraphicsItem::contains().

font()

Returns the font that is used to draw the item's text.

isObscuredBy(item)

Reimplemented from QGraphicsItem::isObscuredBy().
opaqueArea()

Reimplemented from QGraphicsItem::opaqueArea().

paint(painter, option, widget)

Reimplemented from QGraphicsItem::paint().

setFont(font)

Sets the font that is used to draw the item's text to font.

setText(text)

Sets the item's text to text. The text will be displayed as plain text. Newline characters ('\n') as well as characters of type QChar::LineSeparator will cause the item to break the text into multiple lines.

shape()

Reimplemented from QGraphicsItem::shape().

text()

Returns the item's text.

type()

Reimplemented from QGraphicsItem::type().
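The class description above can be illustrated with a short usage sketch. This is not part of the original documentation; it assumes a standard Qt 4 widgets setup and is not runnable without the Qt libraries:

```cpp
#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QGraphicsSimpleTextItem>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGraphicsScene scene;
    // Pass the text to the constructor (via the scene's convenience
    // function), or call setText() on the item later.
    QGraphicsSimpleTextItem *item =
        scene.addSimpleText("Hello, Graphics View");

    // setBrush() sets the text fill color; the pen (outline) is left
    // at the default Qt::NoPen, which per the notes above is the
    // fast path for simple lines of text.
    item->setBrush(Qt::darkBlue);
    item->setFont(QFont("Helvetica", 18));

    QGraphicsView view(&scene);
    view.show();
    return app.exec();
}
```

Only the brush is set here, following the advice above to leave the pen unset unless an outline is actually needed.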
http://doc.qt.nokia.com/main-snapshot/qgraphicssimpletextitem.html
House Calendar No. 1

105TH CONGRESS, 1st Session
HOUSE OF REPRESENTATIVES
REPORT 105–1

IN THE MATTER OF REPRESENTATIVE NEWT GINGRICH

OF THE SELECT COMMITTEE ON ETHICS

JANUARY 17, 1997.—Referred to the House Calendar and ordered to be printed

U.S. GOVERNMENT PRINTING OFFICE
37–210 WASHINGTON : 1997

LETTER OF TRANSMITTAL

CONGRESS OF THE UNITED STATES,
Washington, DC, January 17, 1997.

Hon. ROBIN CARLE,
Clerk, House of Representatives,
Washington, DC.

DEAR MADAM CLERK: Pursuant to clause 4(e)(3) of Rule 10, and by direction of the Select Committee on Ethics, I herewith submit the attached report, "In the Matter of Representative Newt Gingrich."

Sincerely,
NANCY L. JOHNSON, Chairman.

CONTENTS

I. INTRODUCTION
   A. Procedural Background
   B. Investigative Process
   C. Summary of the Subcommittee's Factual Findings
      1. AOW/ACTV
      2. Renewing American Civilization
      3. Failure to Seek Legal Advice
      4. Mr. Gingrich's Statements to the Committee
   D. Statement of Alleged Violation
II. SUMMARY OF FACTS PERTAINING TO AMERICAN CITIZENS TELEVISION
   A.
GOPAC
   B. American Opportunities Workshop/American Citizens Television
      1. Background
      2. Planning and Purpose for AOW/ACTV
      3. Letters Describing Partisan, Political Nature of AOW/ACTV
      4. AOW/ACTV in Mr. Gingrich's Congressional District
      5. GOPAC's Connection to ALOF and ACTV
      6. GOPAC Funding of ALOF and ACTV
III. SUMMARY OF FACTS PERTAINING TO "RENEWING AMERICAN CIVILIZATION"
   A. Genesis of the Renewing American Civilization Movement and Course
   B. Role of the Course in the Movement
   C. GOPAC and Renewing American Civilization
      1. GOPAC's Adoption of the Renewing American Civilization Theme
      2. GOPAC's Inability To Fund Its Political Projects in 1992 and 1993
      3. GOPAC's Involvement in the Development, Funding, and Management of the Renewing American Civilization Course
         a. GOPAC Personnel
         b. Involvement of GOPAC Charter Members in Course Design
         c. Letters sent by GOPAC
   D.
"Replacing the Welfare State with an Opportunity Society" as a Political Tool
   E. Renewing American Civilization House Working Group
   F. Marketing of the Course
   G. Kennesaw State College's Role in the Course
   H. Reinhardt College's Role in the Course
   I. End of Renewing American Civilization Course
IV. ETHICS COMMITTEE APPROVAL OF COURSE
V. LEGAL ADVICE SOUGHT AND RECEIVED
VI. SUMMARY OF THE REPORT OF THE SUBCOMMITTEE'S EXPERT
   A. Introduction
   B. Qualifications of the Subcommittee's Expert
   C. Summary of the Expert's Conclusions
      1. The American Citizens Television Program
         a. Private Benefit Prohibition
         b. Campaign Intervention Prohibition
      2. The Renewing American Civilization Course
         a. Private Benefit Prohibition
         b. Campaign Intervention Prohibition
   D. Advice Ms. Roady Would Have Given
VII. SUMMARY OF CONCLUSIONS OF MR. GINGRICH'S TAX COUNSEL
   A.
Introduction
   B. Qualifications of Mr. Gingrich's Tax Counsel
   C. Summary of Conclusions of Mr. Gingrich's Tax Counsel
      1. Private Benefit Prohibition
      2. Campaign Intervention Prohibition
   D. Advice Mr. Holden Would Have Given
VIII. SUMMARY OF FACTS PERTAINING TO STATEMENTS MADE TO THE COMMITTEE
   A. Background
   B. Statements Made by Mr. Gingrich to the Committee, Directly or Through Counsel
      1. Mr. Gingrich's December 8, 1994 Letter to the Committee
      2. March 27, 1995 Letter of Mr. Gingrich's Attorney to the Committee
   C. Subcommittee's Inquiry Into Statements Made to the Committee
   D. Creation of the December 8, 1994 and March 27, 1995 Letters
      1. Creation of the December 8, 1994 Letter
      2. Bases for Statements in the December 8, 1994 Letter
      3. Creation of the March 27, 1995 Letter
      4. Bases for Statements in the March 27, 1995 Letter
IX. ANALYSIS AND CONCLUSION
   A. Tax Issues
   B. Statements Made to the Committee
   C.
Statement of Alleged Violation
      1. Deliberations on the Tax Counts
      2. Deliberations Concerning the Letters
      3. Discussions with Mr. Gingrich's Counsel and Recommended Sanctions
   D. Post-December 21, 1996 Activity
X. SUMMARY OF FACTS PERTAINING TO USE OF UNOFFICIAL RESOURCES
XI. AVAILABILITY OF DOCUMENTS TO INTERNAL REVENUE SERVICE
Appendix

INDEX TO APPENDIX

SUMMARY OF LAW PERTAINING TO ORGANIZATIONS EXEMPT FROM FEDERAL INCOME TAX UNDER SECTION 501(c)(3) OF THE INTERNAL REVENUE CODE
   A. Introduction
   B. The Organizational Test and the Operational Test
      1. Organizational Test
      2. Operational Test
         a. "Educational" Organizations May Qualify for Exemption Under Section 501(c)(3)
         b. To Satisfy the Operational Test, an Organization Must Not Violate the "Private Benefit" Prohibition
         c. To Satisfy the Operational Test, an Organization Must Not Be an "Action" Organization
            (i) If an Organization Participates in a Political Campaign, It Is an Action Organization Not Entitled to Exemption Under Section 501(c)(3)
               (a) The Prohibition is "Absolute"
               (b) Section 501(c)(3) Organizations May Not Establish or Support a PAC
               (c) "Express Advocacy" Is Not Required, and Issue Advocacy Is Prohibited if Used To Convey Support for or Opposition to a Candidate
               (d) Educational Activities May Constitute Participation or Intervention
               (e) Nonpartisan Activities May Constitute Prohibited Political Campaign Participation
               (f) The IRS Has Found Violations of the Prohibition on Political Campaign Participation When an Activity Could Affect or Was Intended To Affect Voters' Preferences
            (ii) If a Substantial Part of an Organization's Activities Is Attempting To Influence Legislation, or Its Primary Goal Can Only Be Accomplished Through Legislation, It Is an "Action" Organization
               (a) Definition of "Legislation"
               (b) Definition of "Attempting To Influence Legislation"
               (c) Definition of "Substantial"
               (d) Circumstances Under Which an Organization's "objectives can be achieved only through the passage of legislation"
         d. To Satisfy the Operational Test, an Organization Must Not Violate the "Private Inurement" Prohibition
Exhibits

Mrs. JOHNSON from the Select Committee on Ethics, submitted the following

I. INTRODUCTION

A. Procedural Background

On September 7, 1994, a complaint was filed with the Committee on Standards of Official Conduct ("Committee") against Representative Newt Gingrich by Ben Jones, Mr. Gingrich's opponent in his 1994 campaign for re-election. The complaint centered on a course taught by Mr. Gingrich called "Renewing American Civilization." Among other things, the complaint alleged that Mr. Gingrich had used his congressional staff to work on the course in violation of House Rules. The complaint also alleged that Mr. Gingrich had created a college course under the sponsorship of 501(c)(3) organizations in order "to meet certain political, not educational, objectives" and, therefore, caused a violation of section 501(c)(3) of the Internal Revenue Code to occur. In partial support of the allegation that the course was a partisan, political project, the complaint alleged that the course was under the control of GOPAC, a political action committee of which Mr. Gingrich was the General Chairman. Mr.
Gingrich responded to this complaint in letters dated October 4, 1994, and December 8, 1994, but the matter was not resolved before the end of the 103rd Congress.

On January 26, 1995, Representative David Bonior filed an amended version of the complaint originally filed by Mr. Jones. It restated the allegations concerning the misuse of tax-exempt organizations and contained additional allegations. Mr. Gingrich responded to that complaint in a letter from his counsel dated March 27, 1995.

On December 6, 1995, the Committee voted to initiate a Preliminary Inquiry into the allegations concerning the misuse of tax-exempt organizations. The Committee appointed an Investigative Subcommittee ("Subcommittee") and instructed it to:

   determine if there is reason to believe that Representative Gingrich's activities in relation to the college course "Renewing American Civilization" were in violation of section 501(c)(3) or whether any foundation qualified under section 501(c)(3), with respect to the course, violated its status with the knowledge and approval of Representative Gingrich * * *.

The Committee also resolved to appoint a Special Counsel to assist in the Preliminary Inquiry. On December 22, 1995, the Committee appointed James M. Cole, a partner in the law firm of Bryan Cave LLP, as the Special Counsel. Mr. Cole's contract was signed January 3, 1996, and he began his work.

On September 26, 1996, the Subcommittee announced that, in light of certain facts discovered during the Preliminary Inquiry, the investigation was being expanded to include the following additional areas:

(1) Whether (House Rule 43, Cl. 1);

(2) Whether Representative Gingrich's relationship with the Progress and Freedom Foundation, including but not limited to his involvement with the course entitled "Renewing American Civilization," violated the foundation's status under 501(c)(3) of the Internal Revenue Code and related regulations (House Rule 43, Cl.
1);

(3) Whether Representative Gingrich's use of the personnel and facilities of the Progress and Freedom Foundation constituted a use of unofficial resources for official purposes (House Rule 45); and

(4) Whether Representative Gingrich's activities on behalf of the Abraham Lincoln Opportunity Foundation violated its status under 501(c)(3) of the Internal Revenue Code and related regulations or whether the Abraham Lincoln Opportunity Foundation violated its status with the knowledge and approval of Representative Gingrich (House Rule 43, Cl. 1).

As discussed below, the Subcommittee issued a Statement of Alleged Violation with respect to the initial allegation pertaining to Renewing American Civilization and also with respect to items 1 and 4 above. The Subcommittee did not find any violations of House Rules in regard to the issues set forth in items 2 and 3 above. The Subcommittee, however, decided to recommend that the full Committee make available to the IRS documents produced during the Preliminary Inquiry for use in its ongoing inquiries of 501(c)(3) organizations. In regard to item 3 above, the Subcommittee decided to issue some advice to Members concerning the proper use of outside consultants for official purposes.

On January 7, 1997, the House conveyed the matter of Representative Newt Gingrich to the Select Committee on Ethics by its adoption of clause 4(e)(3) of rule X, as contained in House Resolution 5. On January 17, 1997, the Select Committee on Ethics held a sanction hearing in the matter pursuant to committee rule 20. Following the sanction hearing, the Select Committee ordered a report to the House, by a roll call vote of 7–1, recommending that Representative Gingrich be reprimanded and ordered to reimburse the House for some of the costs of the investigation in the amount of $300,000. The following Members voted aye: Mrs. Johnson of Connecticut, Mr. Goss, Mr. Schiff, Mr. Cardin, Ms. Pelosi, Mr. Borski, and Mr. Sawyer.
The following Member voted no: Mr. Smith of Texas. The adoption of this report by the House shall constitute such a reprimand and order of reimbursement. Accordingly, the Select Committee recommends that the House adopt a resolution in the following form.

HOUSE RESOLUTION —

Resolved, That the House adopt the report of the Select Committee on Ethics dated January 17, 1997, In the Matter of Representative Newt Gingrich.

Statement Pursuant to Clause 2(l)(3)(A) of Rule XI

No oversight findings are considered pertinent.

B. Investigative Process

The investigation of this matter began on January 3, 1996, and lasted through December 12, 1996. In the course of the investigation, approximately 90 subpoenas or requests for documents were issued, approximately 150,000 pages of documents were reviewed, and approximately 70 people were interviewed. Most of the interviews were conducted by Mr. Cole outside the presence of the Subcommittee. A court reporter transcribed the interviews and the transcripts were made available to the Members of the Subcommittee. Some of the interviews were conducted before the Members of the Subcommittee primarily to explore the issue of whether Mr. Gingrich had provided the Committee, directly or through counsel, inaccurate, unreliable, or incomplete information. During the Preliminary Inquiry, Mr. Cole interviewed Mr. Gingrich twice and Mr. Gingrich appeared before the Subcommittee twice. Several draft discussion documents, with notebooks of exhibits, were prepared for the Subcommittee in order to brief the Members on the findings and status of the Preliminary Inquiry. After receiving the discussion documents, the Subcommittee met to discuss the legal and factual questions at issue.

In most investigations, people who were involved in the events under investigation are interviewed and asked to describe the events.
This practice has some risk with respect to the reliability of the evidence gathered because, for example, memories fade and can change when a matter becomes controversial and subject to an investigation. One advantage the Subcommittee had in this investigation was the availability of a vast body of documentation from multiple sources that had been created contemporaneously with the events under investigation. A number of documents central to the analysis of the matter, in fact, had been written by Mr. Gingrich. Thus, the documents provided a unique, contemporaneous view of people's purposes, motivations, and intentions with respect to the facts at issue. This Report relies heavily, but not exclusively, on an analysis of those documents to describe the acts, as well as Mr. Gingrich's purpose, motivations, and intentions.

As the Report proceeds through the facts, there is discussion of conservative and Republican political philosophy. The Committee and the Special Counsel, however, do not take any positions with respect to the validity of this or any other political philosophy, nor do they take any positions with respect to the desirability of the dissemination of this or any other political philosophy. Mr. Gingrich's political philosophy and its dissemination is discussed only insofar as it is necessary to examine the issues in this matter.

C. Summary of the Subcommittee's Factual Findings

The Subcommittee found that in regard to two projects, Mr. Gingrich engaged in activity involving 501(c)(3) organizations that was substantially motivated by partisan, political goals. The Subcommittee also found that Mr. Gingrich provided the Committee with material information about one of those projects that was inaccurate, incomplete, and unreliable.

1. AOW/ACTV

The first project was a television program called the American Opportunities Workshop ("AOW"). It took place in May 1990. The idea for this project came from Mr.
Gingrich and he was principally responsible for developing its message. AOW involved broadcasting a television program on the subject of various governmental issues. Mr. Gingrich hoped that this program would help create a "citizens' movement." Workshops were set up throughout the country where people could gather to watch the program and be recruited for the citizens' movement. While the program was educational, the citizens' movement was also considered a tool to recruit non-voters and people who were apolitical to the Republican Party. The program was deliberately free of any references to Republicans or partisan politics because Mr. Gingrich believed such references would dissuade the target audience of non-voters from becoming involved.

AOW started out as a project of GOPAC, a political action committee dedicated to, among other things, achieving Republican control of the United States House of Representatives. Its methods for accomplishing this goal included the development and articulation of a political message and the dissemination of that message as widely as possible. One such avenue of dissemination was AOW. The program, however, consumed a substantial portion of GOPAC's revenues. Because of the expense, Mr. Gingrich and others at GOPAC decided to transfer the project to a 501(c)(3) organization in order to attract tax-deductible funding. The 501(c)(3) organization chosen was the Abraham Lincoln Opportunity Foundation ("ALOF"). ALOF was dormant at the time and was revived to sponsor AOW's successor, American Citizens' Television ("ACTV"). ALOF operated out of GOPAC's offices. Virtually all its officers and employees were simultaneously GOPAC officers or employees. ACTV had the same educational aspects and partisan, political goals as AOW. The principal difference between the two was that ACTV used approximately $260,000 in tax-deductible contributions to fund its operations. ACTV broadcast three television programs
ACTV broadcast three television programs 5 in 1990 and then ceased operations. The last program was funded by a 501(c)(4) organization because the show’s content was deemed to be too political for a 501(c)(3) organization. 2. RENEWING AMERICAN CIVILIZATION The second project utilizing 501(c)(3) organizations involved a college course taught by Mr. Gingrich called Renewing American Civilization. Mr. Gingrich developed the course as a subset to and tool of a larger political and cultural movement also called Renewing American Civilization. The goal of this movement, as stated by Mr. Gingrich, was the replacement of the ‘‘welfare state’’ with an ‘‘opportunity society.’’ A primary means of achieving this goal was the development of the movement’s message and the dissemination of that message as widely as possible. Mr. Gingrich intended that a ‘‘Republican majority’’ would be the heart of the movement and that the movement would ‘‘professionalize’’ House Republicans. A method for achieving these goals was to use the movement’s message to ‘‘attract voters, resources, and candidates.’’ According to Mr. Gingrich, the course was, among other things, a primary and essential means to develop and disseminate the message of the movement. The core message of the movement and the course was that the welfare state had failed, that it could not be repaired but had to be replaced, and that it had to be replaced with an opportunity society based on what Mr. Gingrich called the ‘‘Five Pillars of American Civilization.’’ These were: (1) personal strength; (2) entrepreneurial free enterprise; (3) the spirit of invention; (4) quality as defined by Edwards Deming; and (5) the lessons of American history. The message also concentrated on three substantive areas. These were: (1) jobs and economic growth; (2) health; and (3) saving the inner city. This message was also Mr. Gingrich’s main campaign theme in 1993 and 1994 and Mr. 
Gingrich sought to have Republican candidates adopt the Renewing American Civilization message in their campaigns. In the context of political campaigns, Mr. Gingrich used the term ‘‘welfare state’’ as a negative label for Democrats and the term ‘‘opportunity society’’ as a positive label for Republicans. As General Chairman of GOPAC, Mr. Gingrich decided that GOPAC would use Renewing American Civilization as its political message and theme during 1993–1994. GOPAC, however, was having financial difficulties and could not afford to disseminate its political messages as it had in past years. GOPAC had a number of roles in regard to the course. For example, GOPAC personnel helped develop, manage, promote, and raise funds for the course. GOPAC Charter Members helped develop the idea to teach the course as a means for communicating GOPAC’s message. GOPAC Charter Members at Charter Meetings helped develop the content of the course. GOPAC was ‘‘better off’’ as a result of the nationwide dissemination of the Renewing American Civilization message via the course in that the message GOPAC had adopted and determined to be the one that would help it achieve its goals was broadcast widely and at no cost to GOPAC.

The course was taught at Kennesaw State College (‘‘KSC’’) in 1993 and at Reinhardt College in 1994 and 1995. Each course consisted of ten lectures and each lecture consisted of approximately four hours of classroom instruction, for a total of forty hours. Mr. Gingrich taught twenty hours of each course and his co-teacher, or occasionally a guest lecturer, taught twenty hours. Students from each of the colleges as well as people who were not students attended the lectures. Mr. Gingrich’s 20-hour portion of the course was taped and distributed to remote sites, referred to as ‘‘site hosts,’’ via satellite, videotape and cable television.
As with AOW/ACTV, Renewing American Civilization involved setting up workshops around the country where people could gather to watch the course. While the course was educational, Mr. Gingrich intended that the workshops would be, among other things, a recruiting tool for GOPAC and the Republican Party. The major costs for the Renewing American Civilization course were for dissemination of the lectures. This expense was primarily paid for by tax-deductible contributions made to the 501(c)(3) organizations that sponsored the course. Over the three years the course was broadcast, approximately $1.2 million was spent on the project.

The Kennesaw State College Foundation (‘‘KSCF’’) sponsored the course the first year. All funds raised were turned over to KSCF and dedicated exclusively for the use of the Renewing American Civilization course. 1 KSCF did not, however, manage the course and its role was limited to depositing donations into its bank account and paying bills from that account that were presented to it by the Dean of the KSC Business School. KSCF contracted with the Washington Policy Group, Inc. (‘‘WPG’’) to manage and raise funds for the course’s development, production and distribution. Jeffrey Eisenach, GOPAC’s Executive Director from June 1991 to June 1993, was the president and sole owner of WPG. WPG and Mr. Eisenach played similar roles with respect to AOW/ACTV. When the contract between WPG and KSCF ended in the fall of 1993, the Progress and Freedom Foundation (‘‘PFF’’) assumed the role WPG had with the course at the same rate of compensation. Mr. Eisenach was PFF’s founder and president. A group of KSC faculty had objected to the course being taught on the campus because of a belief that it was an effort to use the college to disseminate a political message. Because of the Board of Regents’ decision and the controversy, it was decided that the course would be moved to a private college.
The course was moved to Reinhardt for the 1994 and 1995 sessions. While there, PFF assumed full responsibility for the course. PFF no longer received payments to run the course but, instead, took in all contributions to the course and paid all the bills, including paying Reinhardt for the use of the college’s video production facilities. All funds for the course were raised by and expended by PFF under its tax-exempt status.

1 As general management and support fees, KSCF kept 2.5% of any money raised and KSC’s Business School kept 7.5% of any money raised.

3. FAILURE TO SEEK LEGAL ADVICE

Under the Internal Revenue Code, a 501(c)(3) organization must be operated exclusively for exempt purposes. The presence of a single non-exempt purpose, if more than insubstantial in nature, will destroy the exemption regardless of the number or importance of truly exempt purposes. Conferring a benefit on private interests is a non-exempt purpose. Under the Internal Revenue Code, a 501(c)(3) organization is also prohibited from intervening in a political campaign or providing any support to a political action committee. These prohibitions reflect congressional concerns that taxpayer funds not be used to subsidize political activity.

During the Preliminary Inquiry, the Subcommittee consulted with an expert in the law of tax-exempt organizations and read materials on the subject. Mr. Gingrich’s activities on behalf of AOW/ACTV and Renewing American Civilization, as well as the activities of others on behalf of those projects done with Mr. Gingrich’s knowledge and approval, were reviewed by the expert. The expert concluded that those activities violated the status of the organizations under section 501(c)(3) in that, among other things, those activities were intended to confer more than insubstantial benefits on GOPAC, Mr. Gingrich, and Republican entities and candidates, and provided support to GOPAC. At Mr.
Gingrich’s request, the Subcommittee also heard from tax counsel retained by Mr. Gingrich for the purposes of the Preliminary Inquiry. While that counsel is an experienced tax attorney with a sterling reputation, he has less experience in dealing with tax-exempt organizations law than does the expert retained by the Subcommittee. According to Mr. Gingrich’s tax counsel, the type of activity involved in the AOW/ACTV and Renewing American Civilization projects would not violate the status of the relevant organizations under section 501(c)(3). He opined that once it was determined that an activity was ‘‘educational,’’ as defined by the IRS, and did not have the effect of benefiting a private interest, it did not violate the private benefit prohibition. In the view of Mr. Gingrich’s tax counsel, motivation on the part of an organization’s principals and agents is irrelevant. Further, he opined that a 501(c)(3) organization does not violate the private benefit prohibition or political campaign prohibition through close association with or support of a political action committee unless it specifically calls for the election or defeat of an identifiable political candidate.

Both the Subcommittee’s tax expert and Mr. Gingrich’s tax counsel, however, agreed that had Mr. Gingrich sought their advice before embarking on activities of the type involved in AOW/ACTV and the Renewing American Civilization course, each of them would have advised Mr. Gingrich not to use a 501(c)(3) organization as he had in regard to those activities. The Subcommittee’s tax expert said that doing so would violate 501(c)(3). During his appearance before the Subcommittee, Mr. Gingrich’s tax counsel said that he would not have recommended the use of 501(c)(3) organizations to sponsor the course because the combination of politics and 501(c)(3) organizations is an ‘‘explosive mix’’ almost certain to draw the attention of the IRS. Based on the evidence, it was clear that Mr.
Gingrich intended that the AOW/ACTV and Renewing American Civilization projects have substantial partisan, political purposes. In addition, he was aware that political activities in the context of 501(c)(3) organizations were problematic. Prior to embarking on these projects, Mr. Gingrich had been involved with another organization that had direct experience with the private benefit prohibition in a political context, the American Campaign Academy. In a 1989 Tax Court opinion issued less than a year before Mr. Gingrich set the AOW/ACTV project into motion, the Academy was denied its exemption under 501(c)(3) because, although educational, it conferred an impermissible private benefit on Republican candidates and entities. Close associates of Mr. Gingrich were principals in the American Campaign Academy, Mr. Gingrich taught at the Academy, and Mr. Gingrich had been briefed at the time on the tax controversy surrounding the Academy. In addition, Mr. Gingrich stated publicly that he was taking a very aggressive approach to the use of 501(c)(3) organizations in regard to, at least, the Renewing American Civilization course.

Taking into account Mr. Gingrich’s background, experience, and sophistication with respect to tax-exempt organizations, and his status as a Member of Congress obligated to maintain high ethical standards, the Subcommittee concluded that Mr. Gingrich should have known to seek appropriate legal advice to ensure that his conduct in regard to the AOW/ACTV and Renewing American Civilization projects was in compliance with 501(c)(3). Had he sought and followed such advice—after having set out all the relevant facts, circumstances, plans, and goals described above—501(c)(3) organizations would not have been used to sponsor Mr. Gingrich’s ACTV and Renewing American Civilization projects.

4. MR. GINGRICH’S STATEMENTS TO THE COMMITTEE

In responding to the complaints filed against him concerning the Renewing American Civilization course, Mr.
Gingrich submitted several letters to the Committee. His first letter, dated October 4, 1994, did not address the tax issues raised in Mr. Jones’ complaint, but rather responded to the part of the complaint concerning unofficial use of official resources. In it Mr. Gingrich stated that GOPAC, among other organizations, paid people to work on the course. After this response, the Committee wrote Mr. Gingrich and asked him specifically to address issues related to whether the course had a partisan, political aspect to it and, if so, whether it was appropriate for a 501(c)(3) organization to be used to sponsor the course. The Committee also specifically asked whether GOPAC had any relationship to the course. Mr. Gingrich’s letter in response, dated December 8, 1994, was prepared by his attorney, but it was read, approved, and signed by Mr. Gingrich. It stated that the course had no partisan, political aspects to it, that his motivation for teaching the course was not political, and that GOPAC neither was involved in nor received any benefit from any aspect of the course. In his testimony before the Subcommittee, Mr. Gingrich admitted that these statements were not true.

When the amended complaint was filed with the Committee in January 1995, Mr. Gingrich’s attorney responded to the complaint on behalf of Mr. Gingrich in a letter dated March 27, 1995. His attorney addressed all the issues in the amended complaint, including the issues related to the Renewing American Civilization course. The letter was signed by Mr. Gingrich’s attorney, but Mr. Gingrich reviewed and approved it prior to its being delivered to the Committee. In an interview with Mr. Cole, Mr. Gingrich stated that if he had seen anything inaccurate in the letter he would have instructed his attorney to correct it. Similar to the December 8, 1994 letter, the March 27, 1995 letter stated that the course had no partisan, political aspects to it, that Mr.
Gingrich’s motivation for teaching the course was not political, and that GOPAC had no involvement in nor received any benefit from any aspect of the course. In his testimony before the Subcommittee Mr. Gingrich admitted that these statements were not true. The goal of the letters was to have the complaints dismissed. Of the people involved in drafting or editing the letters, or reviewing them for accuracy, only Mr. Gingrich had personal knowledge of the facts contained in the letters regarding the course. The facts in the letters that were inaccurate, incomplete, and unreliable were material to the Committee’s determination on how to proceed with the tax questions contained in the complaints.

D. Statement of Alleged Violation

On December 21, 1996, the Subcommittee issued a Statement of Alleged Violation stating that Mr. Gingrich had engaged in conduct that did not reflect creditably on the House of Representatives in that by failing to seek and follow legal advice, Mr. Gingrich failed to take appropriate steps to ensure that activities with respect to the AOW/ACTV project and the Renewing American Civilization project were in accordance with section 501(c)(3); and that on or about December 8, 1994, and on or about March 27, 1995, information was transmitted to the Committee by and on behalf of Mr. Gingrich that was material to matters under consideration by the Committee, which information, as Mr. Gingrich should have known, was inaccurate, incomplete, and unreliable. On December 21, 1996, Mr. Gingrich filed an answer with the Subcommittee admitting to this violation of House Rules. The following is a summary of the findings of the Preliminary Inquiry relevant to the facts as set forth in the Statement of Alleged Violation.

II. SUMMARY OF FACTS PERTAINING TO AMERICAN CITIZENS TELEVISION

A. GOPAC

GOPAC was a political action committee organized under Section 527 of the Internal Revenue Code.
As such, contributions to GOPAC were not tax-deductible.2 GOPAC’s goal was to attract people to the Republican party, develop a ‘‘farm).3

* * * * * * *

But the Mission Statement demands that we do much more. To create the level of change needed to become a majority, the new Republican doctrine must be communicated to a broader audience, with greater frequency, in a more usable form. GOPAC needs a bigger ‘‘microphone.’’ (emphasis in the original). (Ex. 2, 283).

GOPAC continued to support this approach to achieving its goals in subsequent years. For example, as stated in its Report to Shareholders dated April 26, 1993:

While both ‘‘message’’ and ‘‘mechanism’’ are important, GOPAC’s comparative advantage lies in developing new ideas—i.e. in the ‘‘message’’).

2 See September 6, 1996 letter from the tax counsel Mr. Gingrich hired during the Preliminary Inquiry, James Holden, at page 41: ‘‘Contributions made to organizations described in section 501(c)(3) qualify generally as charitable deductions under section 170(c)(2). In contrast, contributions made to section 501(c)(4) and section 527 organizations do not qualify as charitable deductions. For this reason, exempt organizations that are described in section 501(c)(3) enjoy the substantial advantage of being able to attract donations that are deductible on the tax returns of contributors.’’

3 Citations containing a ‘‘Tr.’’ indicate the page of the transcript from a witness’s interview. The date of the interview is also provided in the citation.

B. American Opportunities Workshop/American Citizens Television

1. BACKGROUND

In early 1990, GOPAC embarked on a project to produce a television program called the American Opportunities Workshop (‘‘AOW’’).4 ‘‘Triangle, ‘‘Another product of that would be, of course, if we got people interested * * *, we hoped and believed that eventually they would vote Republican.’’ (12/9/96 Riddle Tr. 13).

4 The Committee’s Special Counsel, James Cole, interviewed Mr.
Gingrich on July 17, 1996; July 18, 1996; and December 9, 1996. Mr. Gingrich appeared before the Investigative Subcommittee to give testimony on November 13, 1996, and December 10, 1996.

‘‘Triangle;5 and one on October 27 which was about Taxpayers’ Action Day. The last program was primarily the responsibility of the Council for Citizens Against Government Waste (‘‘CC, indicates (‘‘W ‘‘personal consulting firm’’ and usually had two or three employees. (7/12/96 Eisenach Tr. 9). WPG used GOPAC office space and equipment as part of its compensation. (11/14 ‘‘provided ‘‘Key ‘‘;

* * * * * * *

(Ex. 13, Eisenach 4838–4839 (typed version) and Eisenach 4832–4834 (handwritten version)).

5 A 1989 draft GOPAC document indicates that one of GOPAC’s projects designed to ‘‘create and disseminate the new Republican doctrine for the 1990’s’’ would be the Education Choice Coalition. (Ex. 2, 284).

The ‘‘Key Factors in a House GOP Majority’’ document were the same as those for AOW and ACTV. (12/7/96 Callaway Tr. 37–38). As stated above, AOW was targeted to non-voters. (Ex. 7, WGC2–01025). The ‘‘Key Factors in a House GOP Majority’’ document notes that non-voters are the ones to appeal to in order to change the balance of power. AOW/ACTV based the citizens’ movement on the ‘‘Triangle of American Success’’ which was made up of basic American values, entrepreneurial free enterprise, and technological progress. (Ex. 5, FAM 0011; 12/7/96 Callaway Tr. 14). In the ‘‘Key Factors in a House GOP Majority’’ document, Mr. Gingrich states that ‘‘
The ‘‘Key Factors in a House GOP Majority’’ document notes that the message of the citizens’ movement is designed not to be useful for Democrats because it will be ‘‘very ‘‘the largest focus group project ever undertaken by the Republican Party.’’ (Ex. 14, WGC2 06081, p. 8). He said it concentrated on non-voters under 40 years of age (Ex. 14, WGC2 06081, p. 8) and tested negative language like ‘‘the bureaucratic welfare state’’ and positive language like the ‘‘Triangle of American Success,’’ ‘‘Entrepreneurial Free Enterprise,’’ ‘‘Technological Progress and Innovation,’’ and ‘‘Basic American Values.’’

But second, most young people under 40 are not politicized. The minute you politicize this and you make it narrow and you make it partisan—you lose them. (Ex. 14, WGC2 06081, pp. 23–24).

The theory’s explanation of what is wrong in society was put in terms of ‘‘the bureaucratic welfare state’’ and the ‘‘values of the left.’’ The theory’s explanation of what is good in society was put in terms of ‘‘technological progress,’’ ‘‘entrepreneurial free enterprise,’’ and ‘‘basic American values’’ which were summarized as ‘‘the Triangle of American Success.’’ (Ex. 15, MSI 0030). In describing the target group for building the new governing majority, the report states:

The potential for a new governing majority exists because of the large and growing numbers of non-participating citizens in our political system.

* * * * * * *

(Ex. 15, MSI 0031–0032).

‘‘New,6 dated March 7, 1990, states:

Our May 19th American Opportunities Workshop is the single most exciting project I’ve ever undertaken. I consider this program critical to our efforts to become a Republican majority.

* * * * * * *

6 According to Mr. Callaway this letter may have been sent out, but he did not have a specific recollection of it. (12/7/96 Callaway Tr. 49).

(Ex. 17, 425–426). (Ex. 18, 2782–2783). Mr. Gingrich did not recall this document. When asked whether AOW,7:

7 According to Mr.
Callaway this letter may have been sent out, but he again did not have a specific recollection of it. (12/7/96 Callaway Tr. 58).

These are exciting times at GOPAC and we have been quite busy lately. I am excited about [the] progress of the ‘‘American 8 ‘‘a).9:

8 Jim Tilton was an unpaid senior advisor to GOPAC. He was an attorney and a close friend of Mr. Gingrich. (12/10/96 Gingrich Tr. 10, 11, 56, 57).

9 A GOPAC statement of ‘‘Revenue and Expenses’’ attached to this memorandum shows a single line item for ‘‘AOW/ACTV.’’ (Ex. 21, Eisenach 3957).

An area for immediate attention is ‘‘targets. Please make this a high priority. (Ex. 23, GOPAC3 460). Mr. Gingrich did not recall this memorandum and said that there was an effort to target the 6th District—his congressional district—‘‘only in the sense that we hosted [AOW] from there.’’ (12/9/96 Gingrich Tr. 19). ‘‘identified ‘‘Confidential Masterfile Reports’’ that were used to keep track of contributors. Under the section entitled ‘‘Giving ‘‘Projects such as ACTV, AOW and focus groups.’’ (Ex. 26, Eisenach 4251).10

GOPAC’s Report to Charter Members dated November 11, 1990, includes a section on Community Activism. (Ex. 4, GOPAC3 180–188). In that section it discusses AOW and ACTV. While it states that ACTV is ‘‘legally no longer a GOPAC project,’’ it goes on to discuss ACTV in terms which indicate that it continued to be treated as a GOPAC project. For example it states that ‘‘Our mission is to establish ACTV as a new, interactive information network.’’ (Ex. 4, GOPAC3 181). The Charter Member Report is worded in a manner that indicates ACTV was considered a GOPAC project. For example, it uses phrases like ‘‘Our goal’’ with ACTV, ‘‘Our next ACTV program,’’ and ‘‘Our program was hosted by * * *.’’ (Ex. 4, GOPAC3 181–182).

10 According to Mr. Callaway, the listing of ACTV was a ‘‘bad choice of words.’’ (12/7/96 Callaway Tr. 70).
At the end of the report under the heading ‘‘Getting 11).12

11 There is no evidence that Mr. Gingrich had any significant involvement with this level of the financial aspects of the operations of ALOF. However, because these facts form part of the basis for a recommendation by the Subcommittee that the relevant materials gathered during the preliminary inquiry be made available to the Internal Revenue Service, the matter is set forth in some detail.

12 The original debt from GOPAC listed on ALOF’s tax returns was for $45,247. This is not supported by the checks from GOPAC to ALOF which only reflect $45,000. This additional $247 continued to be listed for the remaining years and was reflected in the ultimate forgiveness of a portion of this debt in 1993. It is not clear what the $247 represents.

.13 The invoices, along with the previously mentioned loans, totaled $160,537.70. This consisted of rent ($12,718.08), postage and office supplies ($8,455.08), services of staff and consultants ($64,864.54), and the loans ($74,500).14 (Ex. 35, ALOF 0029, ALOF 0027, ALOF

13 Because of her assertion of a Constitutional privilege, the Subcommittee was unable to interview the accountant for GOPAC and ALOF.

14 In the tax return for ALOF for 1990, Part VII asks, among other things, whether ALOF had any transactions with a political action committee involving loans, shared facilities, equipment, or paid employees. Even though GOPAC was a political action committee the return answers ‘‘no’’ to all those questions. (Ex. 28, ALOF 0056). The accountant for ALOF, who was also the accountant for GOPAC, said that she had answered those questions in the negative based on her belief that these questions specifically excluded any transactions with political action committees. (10/31/96 Gilbert Tr. 18–20).
She did not discuss this reading of the tax return with anyone at ALOF, but she did fill the form out in this way and they signed it without any questions. (10/31/96 Gilbert Tr. 21). This same error occurred in the tax return for 1991. (Ex. 28, ALOF 0069).

).15 ‘‘ ‘‘American Opportunities Workshop’’ and its successor, American Citizens’ Television. Both of these projects bear significant similarities to the project you have asked us to get involved with, ‘‘Renewing American Civilization.’’ Thus, we enter this undertaking with both enthusiasm and a full understanding of the enormity and complexity of the undertaking. (Ex. 41, Mescon 0651).

15 The amount listed on the Return was $43,785. As referred to earlier, it is unclear what the $247 difference represents.

III. SUMMARY OF FACTS PERTAINING TO ‘‘RENEWING AMERICAN CIVILIZATION’’

A. Genesis of the Renewing American Civilization Movement and Course

In his interview with the Special Counsel, Mr. Gingrich said the idea for the course was first developed while he was meeting with Owen Roberts, a GOPAC Charter Member and advisor, for two days in December 1992. (7/17/96 Gingrich Tr. 11–12, 23–24; 7/15/96 Gaylord Tr. 23–24; Ex. 42, GOPAC2 2492). Mr. Gingrich wrote out notes at this meeting and they were distributed to some of his advisors. (Ex. 42, HAN 02103–02125; 6/26/96 Hanser Tr. 28; 7/15/96 Gaylord Tr. 24–25; 7/12/96 Eisenach Tr. 108–109).16 A review of those notes indicates that the topic of discussion at this meeting centered mostly on a political movement. The notes contain limited references to a course and those are in the context of a means to communicate the message of the movement. The movement was to develop a message and then disseminate and teach that message. (Ex. 42, HAN 02109). One of the important aspects of the movement was the creation of ‘‘disseminating groups and [a] system of communication and education.’’ (Ex. 42, HAN 02109).
It also sought to ‘‘professionalize’’ the House Republicans by using the ‘‘message to attract voters, resources and candidates’’ and develop a ‘‘mechanism for winning seats.’’ (Ex. 42, HAN 02110). The ultimate goal of the movement was to replace the welfare state with an opportunity society, and all efforts had to be exclusively directed to that goal. (Ex. 42, HAN 02119). Ultimately, it was envisioned that ‘‘a Republican majority [would be] the heart of the American Movement * * *’’. (Ex. 42, HAN 02117).17 Mr. Gingrich’s role in this movement was to be the ‘‘advocate of civilization,’’ the ‘‘definer of civilization,’’ the ‘‘teacher of the rules of civilization,’’ the ‘‘arouser of those who form civilization,’’ the ‘‘organizer of the pro-civilization activists,’’ and the ‘‘leader (possibly) of the civilizing forces.’’ (Ex. 42, HAN 02104). In doing this, he intended to ‘‘retain a primary focus on elected political power as the central arena and fulcrum by which a free people debate their future and govern themselves.’’ (Ex. 42, HAN 02104). The support systems for this movement included GOPAC, some Republican international organizations, and possibly a foundation. (Ex. 42, HAN 02121). There was substantial discussion of how to disseminate the message of the movement. (Ex. 42, HAN 02109, 02110, 02111). Some of the methods discussed for this dissemination included, ‘‘Possibly a series of courses with audio and videotape followons’’/‘‘Possibly a textbook (plus audio, video, computer) series’’/‘‘Campus (intellectual) appearances on ‘the histories’ Gingrich the Historian applying the lessons of history to public life.’’ (Ex. 2, HAN 02118). One of the tasks listed for 1993 is ‘‘Design vision and its communication and communicate it with modification after feedback.’’ (Ex. 2, HAN 02120). According to Mr. Gingrich, the course was to be a subset of the movement and was to be a primary and essential means for developing and disseminating the message of the movement. 
(7/17/96 Gingrich Tr. 42, 58; 11/13/96 Gingrich Tr. 126–127). Another description of the Renewing American Civilization movement is found in notes of a speech Mr. Gingrich gave on January 23, 1993, to the National Review Institute. (Ex. 44, PFF 14473–14477, PFF 38279–38288).18

16 Among the people who received copies of the notes were Mr. Hanser, Mr. Gaylord and Mr. Eisenach. In a subsequent memorandum to Gay Gaines and Lisa Nelson, as Ms. Gaines and Ms. Nelson were about to take over the management of GOPAC in October 1993, Mr. Gingrich described the roles each of the three men played in his life as follows:

1. Joe Gaylord is empowered to supervise my activities, set my schedule, advise me on all aspects of my life and career. He is my chief counselor and one of my closest friends. * * *

2. Steven Hanser is my chief ideas adviser, close personal friend of twenty years, and chief language thinker. * * *

* * * * * * *

4. Jeff Eisenach is our senior intellectual leader and an entrepreneur with great talent and determination. * * *

(Ex. 43, GDC 11551, 11553).

17 Mr. Gingrich said that he intended the movement to be international in scope. Until some point in 1995, however, its scope was only national. (7/17/96 Gingrich Tr. 33).

In those notes, Mr. Gingrich wrote that ‘‘our generation’s rendezvous with history is to launch a movement to renew American civilization.’’ (Ex. 44, PFF 14474). He noted that a majority of Americans favor renewing American civilization and that ‘‘[w]e are ready to launch a 21st century conservatism that will renew American civilization, transform America from a welfare state into an opportunity society and create a conservative governing majority.’’ (Ex. 44, PFF 14475). Mr. Gingrich then goes on to describe the five pillars of American civilization and the three areas where the movement needs to offer solutions.19 He then wrote that if they develop solutions for those three areas they ‘‘will decisively trump the left.
At that point either Clinton will adopt our solutions or the country will fire the president who subsidizes decay and blocks progress.’’ (Ex. 44, PFF 14476). The notes end with the following:

We must renew American civilization by studying these principles, networking success stories, applying these success stories to develop programs that will lead to dramatic progress, and then communicating these principles and these opportunities so the American people have a clear choice between progress, renewal, prosperity, safety and freedom within America [sic] civilization versus decay, decline, economic weakness, violent crime and bureaucratic dominance led by a multicultural elite. Given that choice, our movement for renewing American civilization will not just win the White House in 1996, we will elect people at all levels dedicated to constructive proposals. (Ex. 44, PFF 14477). (Emphasis in the original).20

In a draft document entitled ‘‘Renewing American Civilization Vision Statement,’’ written by Mr. Gingrich and dated March 19, 1993, he again described the movement in partisan terms and emphasized that it needed to communicate the vision of renewing American civilization on a very large scale. (Ex. 46, WGC 00163–00171, WGC 00172–00191). He wrote that renewing American civilization will require ‘‘a new party system so we can defeat the Democratic machine and transform American society into a more productive, responsible, safe country by replacing the welfare state with an opportunity society.’’ (Ex. 46, WGC 00163).

B. Role of the Course in the Movement

Mr. Gingrich was asked about the role of the course in the movement. He said that the course was ‘‘the only way actually to develop and send * * * out’’ the message of the movement. (7/17/96

18 This appears to be the earliest example of Mr. Gingrich speaking about the Renewing American Civilization movement. A draft of this document in Mr. Gingrich’s handwriting is attached to the typed version of the notes.
19 Although not mentioned in this speech, those five pillars and three areas are each separate lectures in what became the course.

20 Two days later Mr. Gingrich delivered a Special Order on the House floor concerning Renewing American Civilization. In this speech he described a movement to renew American civilization, but did not mention the course. He did discuss the five pillars of American civilization and the three areas where solutions needed to be developed. (Ex. 45, LIP 00036–00045).

Gingrich Tr. 42). In a later interview, he modified this statement to say that the course was ‘‘clearly the primary and dominant method; it was not the only way one could have done it. But I think it was essential to do it, to have the course.’’ (11/13/96 Gingrich Tr. 126–127).

The earliest known documentary reference to the course in the context of the movement is in an agenda for a meeting held on February 15, 1993, at GOPAC’s offices. The meeting had two agenda items: ‘‘I. General Planning/Renewing American Civilization’’ and ‘‘II. Political/GOPAC Issues.’’ (Ex. 47, JR–0000645–0000647). Under the first category, one topic listed is ‘‘American Civilization Class/Uplink.’’ (Ex. 47, JR–0000645). Under the second category two of the items listed are ‘‘GOPAC Political Plan & Schedule’’ and ‘‘Charter Meeting Agenda.’’ (Ex. 47, JR–0000645).21

Attached to the agenda for this meeting is a ‘‘Mission Statement’’ written by Mr. Gingrich which applied to the overall Renewing American Civilization movement, including the course. (7/12/96 Eisenach Tr. 248–249; 7/17/96 Gingrich Tr. 145–146).
It states:

We will develop a movement to renew American civilization using the 5 pillars of 21st Century Freedom so people understand freedom and progress is possible and their practical, daily lives can be far better.* As people become convinced American civilization must and can be renewed and the 5 pillars will improve their lives we will encourage them and help them to network together and independently, autonomously initiate improvements wherever they want. However, we will focus on economic growth, health, and saving the inner city as the first three key areas to improve. Our emphasis will be on reshaping law and government to facilitate improvement in all of [A]merican society. We will emphasize elections, candidates and politics as vehicles for change and the news media as a primary vehicle for communications. To the degree Democrats agree with our goals we will work with them but our emphasis is on the Republican Party as the primary vehicle for renewing American civilization.

*Renewing American Civilization must be communicated as an intellectual-cultural message with governmental-political consequences. (footnote in original) (Ex. 47, JR–0000646).

In February 1993, Mr. Gingrich first approached Mr. Mescon about teaching the course at KSC. (Ex. 48, Mescon 0278; 6/13/96 Mescon Tr. 26–27). Mr. Gingrich had talked to Dr. Mescon in October or November 1992 about the general subject of teaching, but there was no mention of the Renewing American Civilization course at that time. (6/13/96 Mescon Tr. 12–14). The early discussions with Mr. Mescon included the fact that Mr. Gingrich intended to have the Renewing American Civilization course disseminated through a satellite uplink system. (Ex. 49, Mescon 0664; 6/13/96 Mescon Tr. 29–30).

21 It is not clear whether the meeting was exclusively a GOPAC meeting, but at least part of the agenda explicitly concerned GOPAC projects. As will be discussed later, GOPAC’s political plan for 1993 centered on Renewing American Civilization. As also discussed below, GOPAC’s April 1993 Charter Meeting was called ‘‘Renewing American Civilization’’ and employed breakout sessions for Charter Members to critique and improve individual components of the course on Renewing American Civilization. (7/17/96 Gingrich Tr. 69–70; 7/12/96 Eisenach Tr. 144–146; 7/15/96 Gaylord Tr. 46).

Shortly before this discussion with Mr. Mescon, in late January 1993, Mr. Gingrich met with a group of GOPAC Charter Members. In a letter written some months later to GOPAC Charter Members, Mr. Gingrich described the meeting as follows:

During our meeting in January, a number of Charter Members were kind enough to take part in a planning session on ‘‘Renewing American Civilization.’’ That session not only affected the substance of what the message was to be, but also how best the new message of positive solutions could be disseminated to this nation’s decision makers—elected officials, civic and business leaders, the media and individual voters. In addition to my present avenues of communication I decided to add an avenue close to my heart, that being teaching. I have agreed with Kennesaw State College, * * * to teach ‘‘Renewing American Civilization’’ as a for-credit class four times during the next four years. Importantly, we made the decision to have the class available as a ‘‘teleseminar’’ to students all across the country, reaching college campuses, businesses, civic organizations, and individuals through a live ‘‘uplink,’’ video tapes and audio tapes. Our hope is to have at least 50,000 individuals taking the class this fall and to have trained 200,000 knowledgeable citizen activists by 1996 who will support the principles and goals we have set. (Ex. 50, Kohler 137–138).22

During an interview with the Special Counsel, Mr.
Gingrich said he doubted that he had written this letter and said that the remark in the letter that the Charter Members’ comments played a large role in developing the course ‘‘exaggerates the role of GOPAC.’’ The letter was written to ‘‘flatter’’ the Charter Members. (11/13/96 Gingrich Tr. 129–130).

22 The letter goes on to state that: [L]et me emphasize very strongly that the ‘‘Renewing American Civilization’’ project is not being carried out under the auspices of GOPAC, but rather by Kennesaw State College and the Kennesaw State College Foundation. We will not be relying on GOPAC staff to support the class, and I am not asking you for financial support. (Ex. 50, Kohler 138) (emphasis in the original).

In a March 29, 1993 memorandum, Mr. Gingrich specifically connects the course with the political goals of the movement. The memorandum is entitled ‘‘Renewing American Civilization as a defining concept’’ and is directed to ‘‘Various Gingrich Staffs.’’23 The original draft of the memorandum is in Mr. Gingrich’s handwriting. (Ex. 51, GDC 08891–08892, GDC 10236–10238). In the memorandum, Mr. Gingrich wrote:

I believe the vision of renewing American civilization will allow us to orient and focus our activities for a long time to come. At every level from the national focus of the Whip office to the 6th district of Georgia focus of the Congressional office to the national political education efforts of GOPAC and the re-election efforts of FONG24 we should be able to use the ideas, language and concepts of renewing American civilization. (Ex. 51, GDC 08891).

23 At the top of this memorandum is a handwritten notation (not Mr. Gingrich’s) stating: ‘‘Tuesday 4 p.m. GOPAC Mtg.’’ (Ex. 51, GDC 08891).

In the memorandum, he describes a process for the dissemination of the message of Renewing American Civilization to virtually every person he talks to. This dissemination includes a copy of the Special Order speech and a one-page outline of the course.
He then goes on to describe the role of the course in this process:

The course is only one in a series of strategies designed to implement a strategy of renewing American civilization. (Ex. 51, GDC 08891).

Another strategy involving the course is:

Getting Republican activists committed to renewing American civilization, to setting up workshops built around the course, and to opening the party up to every citizen who wants to renew American civilization. (Ex. 51, GDC 08892).25

Jana Rogers, the Site Host Coordinator for the course in 1993, was shown a copy of this memorandum and said she had seen it in the course of her work at GOPAC. (7/3/96 Rogers Tr. 64). She said that this represented what she was doing in her job with the course. (7/3/96 Rogers Tr. 67–69). Steve Hanser, a paid GOPAC consultant and someone who worked on the course, also said that the contents of the memorandum were consistent with the strategy related to the movement. (6/28/96 Hanser Tr. 42–45). The most direct description of the role of the course in relation to the movement to renew American civilization is set out in a document which Mr. Gingrich indicates he wrote. (7/17/96 Gingrich Tr. 162–163). The document has a fax stamp date of May 13, 1993 and indicates it is from the Republican Whip’s Office. (Ex. 52, GDC 10639–10649). The document has three parts to it. The first is entitled ‘‘Renewing America Vision’’ (Ex. 52, GDC 10639–10643); the second is entitled ‘‘Renewing America Strategies’’ (Ex. 52, GDC 10644–10646); and the third is entitled ‘‘Renewing American Civilization Our Goal.’’ (Ex. 52, GDC 10647–10649). Mr. Gingrich said that the third part was actually a separate document. (7/17/96 Gingrich Tr. 162–164). While all three parts are labeled ‘‘draft,’’ the document was distributed to a number of Mr. Gingrich’s staff members and associates, including Mr. Hanser, Ms. Prochnow, Ms. Rogers, Mr. Gaylord, Mr. Eisenach, and Allan Lipsett (a press secretary).
Each of the recipients of the document has described it as an accurate description of the Renewing American Civilization movement. (6/28/96 Hanser Tr. 48, 53; 7/10/96 Prochnow Tr. 70–71; 7/3/96 Rogers Tr. 71–75; 7/15/96 Gaylord Tr. 66–67; 7/12/96 Eisenach Tr. 148–149, 272–275; Lipsett Tr. 30–31).26 In the first section, Mr. Gingrich wrote:

The challenge to us is to be positive, to be specific, to be intellectually serious, and to be able to communicate in clear language a clear vision of the American people and why it is possible to create that America in our generation. Once the American people understand what they can have they will insist that their politicians abolish the welfare state which is crippling them, their children, and their country and that they replace it with an opportunity society based on historically proven principles that we see working all around us. (Ex. 52, GDC 10643).

In the second portion of the document, Mr. Gingrich describes how the vision of renewing America will be accomplished. He lists thirteen separate efforts that fall into categories of communication of the ideas in clear language, educating people in the principles of replacing the welfare state with an opportunity society, and recruiting public officials and activists to implement the doctrines of renewing American civilization. (Ex. 52, GDC 10644–10646).

In the third section, Mr. Gingrich explicitly connects the course to the movement. First he starts out with three propositions that form the core of the course: (1) a refrain he refers to as the ‘‘four can’ts;’’27 (2) the welfare state has failed; and (3) the welfare state must be replaced because it cannot be repaired. (Ex. 52, GDC 10647; see also Ex. 54, PFF 18361, 18365–18367). He then described the goal of the movement:

Our overall goal is to develop a blueprint for renewing America by replacing the welfare state, recruit, discover, arouse and network together 200,000 activists including candidates for elected office at all levels, and arouse enough volunteers and contributors to win a sweeping victory in 1996 and then actually implement our victory in the first three months of 1997. Our specific goals are to:

1. By April 1996 have a thorough, practical blueprint for replacing the welfare state that can be understood and supported by voters and activists. We will teach a course on Renewing American civilization on ten Saturday mornings this fall and make it available by satellite, by audio and video tape and by computer to interested activists across the country. A month will then be spent redesigning the course based on feedback and better ideas. Then the course will be retaught in Winter Quarter 1994. It will then be rethought and redesigned for nine months of critical reevaluation based on active working groups actually applying ideas across the country. The course will be taught for one final time in Winter Quarter 1996.

2. Have created a movement and momentum which require the national press corps to actually study the material in order to report the phenomenon thus infecting them with new ideas, new language and new perspectives.

3. Have a cadre of at least 200,000 people committed to the general ideas so they are creating an echo effect on talk radio and in letters to the editor and most of our candidates and campaigns reflect the concepts of renewing America. Replacing the welfare state will require about 200,000 activists (willing to learn now [sic] to replace the welfare state, to run for office and to actually replace the welfare state once in office) and about six million supporters (willing to write checks, put up yard signs, or do a half day’s volunteer work). (Ex. 52, GDC 10647–10649).

24 ‘‘FONG’’ stands for Mr. Gingrich’s campaign organization, ‘‘Friends of Newt Gingrich.’’

25 The ‘‘party’’ referred to in the quote is the Republican Party. (11/13/96 Gingrich Tr. 80).

26 Mr. Eisenach apparently sent a copy of this to a GOPAC supporter in preparation for a meeting in May of 1993. (7/12/96 Eisenach Tr. 146–149). In the accompanying letter, Mr. Eisenach said: ‘‘The enclosed materials provide some background for our discussions, which I expect will begin with a review of the Vision, Strategies and Goals of our efforts to Renew American Civilization. The class Newt is teaching at Kennesaw State College this Fall is central to that effort, and GOPAC and the newly created Progress & Freedom Foundation both play important roles as well. (Ex. 13, GOPAC2 2337).’’

27 This refrain goes as follows: ‘‘You cannot maintain a civilization with twelve-year-olds having babies, fifteen-year-olds shooting each other, seventeen-year-olds dying of AIDS, and eighteen-year-olds getting diplomas they can’t read.’’

The ‘‘sweeping victory’’ referred to above is by Republicans. (11/13/96 Gingrich Tr. 86). The reference to ‘‘our candidates’’ above is to Republican candidates. (11/13/96 Gingrich Tr. 90). According to Mr. Gingrich, Mr. Gaylord, and Mr. Eisenach, the three goals set forth above were to be accomplished by the course. (7/17/96 Gingrich Tr. 174–179; 7/15/96 Gaylord Tr. 66–67; 7/12/96 Eisenach Tr. 225; Ex. 55, GOPAC2 2419; Ex. 56, GOPAC2 2172–2173; Ex. 57, Mescon 0626). In various descriptions of the course, Mr. Gingrich stated that his intention was to teach it over a four-year period. After each teaching of the course he intended to have it reviewed and improved. The ultimate goal was to have a final product developed by April of 1996. (7/17/96 Gingrich Tr. 109; Ex. 56, GOPAC2 2170). An explanation of this goal is found in a three-page document, in Mr. Gingrich’s handwriting, entitled ‘‘End State April 1996.’’ (Ex.
58, PFF 20107–20109). Mr. Gingrich said he wrote this document early in the process of developing the movement and described it as a statement of where he hoped to be by April 1996 in regard to the movement and the course. (7/17/96 Gingrich Tr. 108–115). On the first page he wrote that the 200,000 plus activists will have a common language and general vision of renewing America, and a commitment to replacing the welfare state. In addition, ‘‘[v]irtually all Republican incumbents and candidates [will] have the common language and goals.’’ (Ex. 58, PFF 20107). On the second page he wrote that the ‘‘Republican platform will clearly be shaped by the vision, language, goals and analysis of renewing America.’’ (Ex. 58, PFF 20108). In addition, virtually all Republican Presidential candidates will broadly agree on that vision, language, goals and analysis. (Ex. 58, PFF 20108). The Clinton administration and the Democratic Party will be measured by the vision, principles and goals of renewing America and there will be virtual agreement that the welfare state has failed. (Ex. 58, PFF 20108). On the last page Mr. Gingrich wrote a timeline for the course running from September of 1993 through March of 1996. At the point on the timeline where November 1994 appears, he wrote the word ‘‘Election.’’ (Ex. 58, PFF 20109). When Mr. Hanser was asked about this document he said that the vision, language, and concepts of the Renewing American Civilization movement discussed in the document were being developed in the course. (6/28/96 Hanser Tr. 53). He went on to say that ‘‘End State’’ was ‘‘an application of those ideas to a specific political end, which is one of the purposes, remember, for the course.’’ (6/28/96 Hanser Tr. 54). There was an appreciation that this would be primarily a Republican endeavor. (6/28/96 Hanser Tr. 30).

C. GOPAC and Renewing American Civilization

As discussed above, GOPAC was a political action committee dedicated to, among other things, achieving Republican control of the United States House of Representatives. (11/13/96 Gingrich Tr. 169; 7/3/96 Rogers Tr. 38–40). One of the methods it used was the creation of a political message and the dissemination of that message. (7/12/96 Eisenach Tr. 18–19; 6/28/96 Hanser Tr. 13–14; 7/3/96 Rogers Tr. 36). The tool principally used by GOPAC to disseminate its message was audiotapes and videotapes. These were sent to Republican activists, elected officials, potential candidates, and the public. The ultimate purpose of this effort was to help Republicans win elections. (6/27/96 Nelson Tr. 21–22; 7/15/96 Gaylord Tr. 37, 39; 7/3/96 Rogers Tr. 35–36).

1. GOPAC’S ADOPTION OF THE RENEWING AMERICAN CIVILIZATION THEME

At least as of late January 1993, Mr. Gingrich and Mr. Eisenach had decided that GOPAC’s political message for 1993 and 1994 would be ‘‘Renewing American Civilization.’’28 (Ex. 59, PFF 37584–37590; 11/13/96 Gingrich Tr. 157; 7/17/96 Gingrich Tr. 61–62, 74; 7/15/96 Gaylord Tr. 35–36, 42–43; 7/3/96 Rogers Tr. 35, 54–56; 6/28/96 Taylor Tr. 26; 6/27/96 Nelson Tr. 34, 46). As described in a February 1993 memorandum over Mr. Gingrich’s name to GOPAC Charter Members:

GOPAC’s core mission—to provide the ideas and the message for Republicans to win at the grass roots—is now more important than ever, and we have important plans for 1993 and for the 1993–1994 cycle. The final enclosure is a memorandum from Jeff Eisenach outlining our 1993 program which I encourage you to review carefully and, again, let me know what you think. (Ex. 60, PFF 37569).

The attached memorandum, dated February 1, 1993, is from Mr. Eisenach to Mr. Gingrich and references their recent discussions concerning GOPAC’s political program for 1993. (Ex. 59, PFF 37584–37590). It then lists five different programs.
The fourth one states:

(4) Message Development/‘‘Renewing American Civilization’’—focus group project designed to test and improve the ‘‘Renewing American Civilization’’ message in preparation for its use in 1993 legislative campaigns and 1994 Congressional races. (Ex. 59, PFF 37584) (emphasis in original).

28 As mentioned above, the earliest mention of the Renewing American Civilization course was in February 1993. (Ex. 47, JR–0000646).

Of the other four programs listed, three relate directly to the use of the Renewing American Civilization message. The fourth—the ‘‘ ‘Tory (Franchise) Model’ R & D’’—was not done. (7/12/96 Eisenach Tr. 188). This same political program was also listed in two separate GOPAC documents dated April 26, 1993. One is entitled ‘‘1993 GOPAC POLITICAL PROGRAM’’ (Ex. 61, PP001187–00193) and the other is the ‘‘GOPAC Report to Shareholders.’’ (Ex. 62, Eisenach 2536–2545). The first page of the Report to Shareholders states:

The challenge facing Republicans, however, is an awesome one: We must build a governing majority, founded on basic principles, that is prepared to do what we failed to do during the last 12 years: Replace the Welfare State with an Opportunity Society and demonstrate that our ideas are the key to progress, freedom and the Renewal of American Civilization. (Ex. 62, Eisenach 2536).

In describing the political programs, these documents provide status reports that indicate that the Renewing American Civilization message is at the center of each project. Under ‘‘Off-Year State Legislative Races (New Jersey, Virginia)’’ the project is described as ‘‘Newt speaking at and teaching training seminar for candidates at [a June 5, 1993] Virginia Republican Convention.’’ (Ex. 61, PP001187; Ex. 62, Eisenach 2540).29 As discussed below, that speech and training session centered on the Renewing American Civilization message.
Under ‘‘Ongoing Political Activities’’ the first aspect of the project is described as sending tapes and establishing a training module on Renewing American Civilization and health care. (Ex. 61, PP001187; Ex. 62, Eisenach 2540). Under ‘‘Curriculum Update and Expansion’’ the project is described as the production of new training tapes based on Mr. Gingrich’s session at the Virginia Republican Convention. (Ex. 61, PP01189; Ex. 62, Eisenach 2541).30

2. GOPAC’S INABILITY TO FUND ITS POLITICAL PROJECTS IN 1992 AND 1993

At the end of 1992, GOPAC was at least $250,000 short of its target income (Ex. 65, PFF 38054) and financial problems lasted throughout 1993. (7/15/96 Gaylord Tr. 71–72). Because of these financial shortfalls, GOPAC had to curtail its political projects, particularly the tape program described above. (Ex. 65, PFF 38054–38060; Ex. 66, WGC 07428; 7/15/96 Gaylord Tr. 71–72, 76). For example, according to Mr. Gaylord, GOPAC usually sent out eight tapes a year; however, in 1993, it only sent out two. (7/15/96 Gaylord Tr. 76).

29 It is not clear whether any work was done in New Jersey because that state had a Republican legislature and did not need GOPAC’s help. (7/15/96 Gaylord Tr. 42).

30 GOPAC later produced two tapes from the session. One was called ‘‘Renewing American Civilization’’ and was mailed to 8,742 people. (Ex. 63, JG 000001693). The other was called ‘‘Leading the Majority’’ and became a major training tool for GOPAC, used at least into 1996. (6/27/96 Nelson Tr. 18). Both are based on the Renewing American Civilization message and contain the core elements of the course. The ‘‘Renewing American Civilization’’ tape contains more of the RAC philosophy than the ‘‘Leading the Majority’’ tape, however, both contain the basics of the course that Mr. Gingrich describes as the ‘‘central proposition’’ or ‘‘heart of the course.’’ (Ex. 56, GOPAC2 2146–2209; Ex. 64, PP000330–000337; Ex. 54, PFF 18361, 18365–18367).
One of these was the ‘‘Renewing American Civilization’’ tape made from Mr. Gingrich’s June 1993 training session at the Virginia Republican Convention (Ex. 63, JG 000001693). Accompanying the mailing of this tape was a letter from Joe Gaylord in his role as Chairman of GOPAC. That letter states: * * * (Ex. 67, WGC 06215). In light of GOPAC’s poor financial condition, the dissemination of the Renewing American Civilization message through the course was beneficial to its political projects. In this regard, the following exchange occurred with Mr. Gingrich:

Mr. Cole: [I]s one of the things GOPAC wanted to have done during 1993 and 1994 was the dissemination of its message; is that correct?
Mr. Gingrich: Yes.
Mr. Cole: GOPAC also did not have much money in those years; is that correct?
Mr. Gingrich: That is correct. Particularly—it gets better in ’94, but ’93 was very tight.
Mr. Cole: That curtailed how much it could spend on disseminating its message?
Mr. Gingrich: Right.
Mr. Cole: The message that it was trying to disseminate was the Renewing American Civilization message; is that right?
Mr. Gingrich: Was the theme, yes.

(11/13/96 Gingrich Tr. 157–158). With respect to whether the dissemination of the course benefited GOPAC, the following exchange occurred:

Mr. Cole: Was GOPAC better off in a situation where the message that it had chosen as its political message for those years was being disseminated by the course? Was it better off?
Mr. Gingrich: The answer is yes.

(11/13/96 Gingrich Tr. 167).

3. GOPAC’S INVOLVEMENT IN THE DEVELOPMENT, FUNDING, AND MANAGEMENT OF THE RENEWING AMERICAN CIVILIZATION COURSE

a. GOPAC personnel

Starting at least as early as February 1993, Mr. Eisenach, then GOPAC’s Executive Director, was involved in developing the Renewing American Civilization course. Although Mr. Eisenach has stated that Mr. Gaylord was responsible for the development of the course until mid-May 1993 (7/12/96 Eisenach Tr. 71–75; Ex.
68, Eisenach Testimony Before House Ethics Committee at Tr. 142; Ex. 69, PFF 1167), Mr. Gaylord stated that he never had such a responsibility. (7/15/96 Gaylord Tr. 15–18). Additionally, Mr. Gingrich and others involved in the development of the course identified Mr. Eisenach as the person primarily responsible for the development of the course from early on. (7/17/96 Gingrich Tr. 117, 121; 6/13/96 Mescon Tr. 30–31; 6/28/96 Hanser Tr. 74–75; 7/3/96 Rogers Tr. 17–18, 22).31 Several documents also establish Mr. Eisenach’s role in the development of the course starting at an early stage. One document written by Mr. Eisenach is dated February 25, 1993, and shows him, as well as others, tasked with course development and marketing. (Ex. 70, PFF 16628). A memorandum from Mr. Gingrich to Mr. Mescon, dated March 1, 1993, describes how Mr. Eisenach is involved in contacting a number of institutions in regard to funding for the course. (Ex. 71, KSC 3491). Aside from Mr. Eisenach, other people affiliated with GOPAC were involved in the development of the course. Mr. Gingrich was General Chairman of GOPAC and had a substantial role in the course. Jana Rogers served as Mr. Eisenach’s executive assistant at GOPAC during the early part of 1993 and in that role worked on the development of the course. (7/3/96 Rogers Tr. 16–17). In June 1993, she temporarily left GOPAC at Mr. Eisenach’s request to become the course’s Site Host Coordinator. As a condition of her becoming the site host coordinator, she received assurances from both Mr. Eisenach and Mr. Gaylord that she could return to GOPAC when she had finished her assignment with the course. (7/3/96 Rogers Tr. 12–16). After approximately five months as the course’s Site Host Coordinator, she returned to GOPAC for a brief time. (7/3/96 Rogers Tr. 24–25). Steve Hanser, a member of the GOPAC Board and a paid GOPAC consultant, helped develop the course. (6/28/96 Hanser Tr. 10, 19–21). Mr.
Gaylord was a paid consultant for GOPAC and had a role in developing the course. (7/15/96 Gaylord Tr. 15). Pamla Prochnow was hired as the Finance Director for GOPAC in April 1993.32 Ms. Prochnow spent a portion of her early time at GOPAC raising funds for the course. (7/10/96 Prochnow Tr. 14–16; 6/13/96 Mescon Tr. 63–67, 82; Ex. 74, Documents produced by Prochnow).33 A number of the people and entities she contacted were GOPAC supporters. In fact, according to Mr. Eisenach, approximately half of the first year’s funding for the course came from GOPAC supporters. (Ex. 69, PFF 1168–1169). Some of those people also helped fund the course in 1994. (See attachments to Ex. 69, PFF 1252–1277) (the documents contain Mr. Eisenach’s marks of ‘‘G’’ next to the people, companies, and foundations that were donors or related to donors to GOPAC.)

31 The February 15, 1993, agenda for the meeting where the RAC course and other GOPAC issues were discussed, lists Mr. Eisenach as an attendee, but does not list Mr. Gaylord as being present. (Ex. 47, JR–0000645).

32 During her interviewing process, Ms. Prochnow was provided with materials to help her understand the goals of GOPAC. (Ex. 72, GOPAC2 0529). Although she has no specific recollection as to what these materials were, she believes they were materials related to the Renewing American Civilization movement. (7/10/96 Prochnow Tr. 18–19; Ex. 73, PP000459–000463; PP00778).

33 Mr. Eisenach has stated that he did not ask Ms. Prochnow to do this fundraising work, but rather Mr. Gaylord did. (7/12/96 Eisenach Tr. 71, 75; Ex. 65, PFF 1168). However, both Mr. Gaylord and Ms. Prochnow clearly state that it was Mr. Eisenach, not Mr. Gaylord, who directed Ms. Prochnow to perform the fundraising work. (7/15/96 Gaylord Tr. 16, 17; 7/10/96 Prochnow Tr. 14, 73–74; Ex. 71, Letter dated July 25, 1996, from Prochnow’s attorney).

When Mr.
Eisenach resigned from GOPAC and assumed the title of the course’s project director, two GOPAC employees joined him in his efforts. Kelly Goodsell had been Mr. Eisenach’s Administrative Assistant at GOPAC since March of 1993 (7/9/96 Goodsell Tr. 8, 11), and Michael DuGally had been an employee at GOPAC since January 1992. (7/19/96 DuGally Tr. 9–10). Both went to work on the course as employees of Mr. Eisenach’s Washington Policy Group (‘‘WPG’’).34 In the contract between WPG and KSCF, it was understood that WPG would devote one-half of the time of its employees to working on the course. WPG had only one other client at this time—GOPAC. In its contract with GOPAC, WPG was to receive the same monthly fee as was being paid by KSCF in return for one-half of the time of WPG’s employees. (Ex. 76, PFF 37450–37451). The contract also stated that to the extent that WPG did not devote full time to KSCF and GOPAC projects, an adjustment in the fee paid to WPG would be made. (Ex. 76, PFF 37450). Neither Ms. Goodsell nor Mr. DuGally worked on any GOPAC project after they started working on the course in June of 1993. (7/9/96 Goodsell Tr. 8, 10–11; 7/19/96 DuGally Tr. 14). Mr. Eisenach said that he spent at the most one-third of his time during this period on GOPAC projects. (7/12/96 Eisenach Tr. 36–37). No adjustment to WPG’s fee was made by GOPAC. (7/12/96 Eisenach Tr. 44).35

The February 15, 1993, agenda discussed above also gives some indication of GOPAC’s role in the development of the Renewing American Civilization course. (Ex. 47, JR–0000645–0000647). Of the eight attendees at that meeting, five worked for or were closely associated with GOPAC (Mr. DuGally, Mr. Eisenach and Ms. Rogers were employees, Mr. Hanser was a member of the Board and a paid GOPAC consultant, and Mr. Gingrich was the General Chairman). Furthermore, the agenda for that meeting indicates that GOPAC political issues were to be discussed as well as course planning issues.
Two of the GOPAC political issues apparently related to: (1) the political program described in the February 1, 1993, memorandum which lists four of GOPAC’s five political projects as relating to Renewing American Civilization (Ex. 60, PFF 37569–37576), and (2) GOPAC’s Charter Meeting agenda entitled ‘‘Renewing American Civilization.’’ As discussed below, this Charter Meeting included breakout sessions to help develop a number of the lectures for the course, as well as GOPAC’s message for the 1993–1994 election cycle. (Ex. 78, PP00448–PP000452). As Mr. Gingrich stated in his interview, his intention was to have GOPAC use Renewing American Civilization as its message during this time frame. (7/17/96 Gingrich Tr. 74; 7/3/96 Rogers Tr. 54–56).

34 As discussed earlier, WPG was a corporation formed by Mr. Eisenach which had a contract with KSCF to run all aspects of the course.

35 The only other person who was involved in the early development of the course was Nancy Desmond. She did not work for GOPAC, but had been a volunteer at Mr. Gingrich’s campaign office for approximately a year before starting to work on the course. (6/13/96 Desmond Tr. 15–16). She continued to work as a volunteer for Mr. Gingrich’s campaign until July of 1993, when she was told to resign from the campaign because of the perceived negative image her two roles would project. (6/13/96 Desmond Tr. 37–38; Ex. 77, PFF 38289).

In 1993 Mr. Eisenach periodically produced a list of GOPAC projects. The list is entitled ‘‘Major Projects Underway’’ and was used for staff meetings. (7/12/96 Eisenach Tr. 213; 7/15/96 Gaylord Tr. 79–80; 6/28/96 Taylor Tr. 43–44). Items related to the Renewing American Civilization course were listed in several places on GOPAC’s project sheets. For example, from April 1993 through at least June 1993, ‘‘Renewing American Civilization Support’’ is listed under the ‘‘Planning/Other’’ section of GOPAC’s projects sheets. (Ex.
79, JG 000001139, JG 000001152, JG 000001173, JG 000001270). Another entry which appears a number of times under ‘‘Planning/Other’’ is ‘‘RAC Pert Chart, etc.’’ (Ex. 79, JG 000001152, JG 000001173, JG 000001270). It refers to a time-line Mr. Eisenach wrote while he was the Executive Director of GOPAC relating to the development of the various components of the course, including marketing and site coordination, funding, readings, and the course textbook. (Ex. 80, PFF 7529–7533; 7/12/96 Eisenach Tr. 212–213). Finally, under the heading ‘‘Political’’ on the May 7, 1993, project sheet, is listed the phrase ‘‘CR/RAC Letter.’’ (Ex. 79, JG 000001152). This refers to a mailing about the course sent over Mr. Gingrich’s name by GOPAC to approximately 1,000 College Republicans. (Ex. 81, Mescon 0918, 0915, 0914 and Meeks 0038–0040; 7/15/96 Gaylord Tr. 81–82).

b. Involvement of GOPAC charter members in course design

As discussed earlier, Mr. Gingrich had a meeting with GOPAC Charter Members in January 1993 to discuss the ideas of Renewing American Civilization. (11/13/96 Gingrich Tr. 132). According to a letter written about that meeting, the idea to teach arose from that meeting. In April 1993, GOPAC held its semi-annual Charter Meeting. Its theme was ‘‘Renewing American Civilization.’’ (Ex. 78, PP000448–PP000452). Mr. Gingrich gave the keynote address, entitled ‘‘Renewing American Civilization,’’ and there were five breakout sessions entitled ‘‘Advancing the Five Pillars of Twenty-first Century Democracy.’’ (Ex. 78, PP000449). Each of the breakout sessions was named for a lecture in the course, and these sessions were used to help develop the content of the course (11/13/96 Gingrich Tr. 164–165; 7/17/96 Gingrich Tr. 69–70; 7/12/96 Eisenach Tr. 144–146; 7/15/96 Gaylord Tr. 46) as well as GOPAC’s political message for the 1993 legislative campaigns and the 1994 congressional races. (11/13/96 Gingrich Tr. 164–165; Ex. 62, Eisenach 2540). As stated in a memorandum from Mr.
Eisenach to GOPAC Charter Members, these breakout sessions were intended to ‘‘dramatically improve both our understanding of the subject and our ability to communicate it.’’ (Ex. 82, Roberts 0045–0048).

c. Letters sent by GOPAC

In June of 1993, GOPAC sent a letter over Mr. Gingrich’s signature stating that ‘‘it is vital for Republicans to now DEVELOP and put forward OUR agenda for America.’’ (Ex. 83, PP000534) (emphasis in original). In discussing an enclosed survey the letter states:

It is the opening step in what I want to be an unprecedented mobilization effort for Republicans to begin the process of replacing America’s failed welfare state. And the key political component of that effort will be an all-out drive to end the Democrat’s 40 year control of the U.S. House of Representatives in 1994! (Ex. 83, PP000535).36

The letter then states that it is important to develop the themes and ideas that will be needed to accomplish the victory in 1994. (Ex. 83, PP000536). In language that is very similar to the core of the course, but with an overtly partisan aspect added to it, the letter states:

Personally, I believe we can and should turn the 1994 midterm elections into not just a referendum on President Clinton, but on whether we maintain or replace the welfare state and the Democratic Party which supports it. I believe the welfare state which the Democrats have created has failed. In fact, I challenge anyone to say that it has succeeded, when today in America twelve year olds are having children, fifteen year olds are killing each other, seventeen year olds are dying of AIDS and eighteen year olds are being given high school diplomas they cannot even read. 
* * * * * * *

And what I want to see our Party work to replace it with is a plan to renew America based on what I call ‘‘pillars’’ of freedom and progress: (1) Personal strength; (2) A commitment to quality in the workplace; (3) Spirit of American Inventiveness; (4) Entrepreneurial free enterprise applied to both the private and public sectors; (5) Applying the lessons of American history as to what works for Americans to proposed government solutions to our problems. After being active in politics for thirty years, and being in Congress for fourteen of them, I firmly believe these five principles can develop a revolutionary change in government. Properly applied, they can dramatically improve safety, health, education, job creation, the environment, the family and our national defense. (Ex. 83, PP000536).

In other letters sent out by GOPAC, the role of the Renewing American Civilization course in relation to the Republican political goals of GOPAC was described in explicit terms. A letter to Neil Gagnon, dated May 5, 1993, over Mr. Gingrich’s name, states:

As we discussed, it is time to lay down a blue print—which is why in part I am teaching the course on Renewing American Civilization. Hopefully, it will provide the structure to build an offense so that Republicans can break through dramatically in 1996. We have a good chance to make significant gains in 1994, but only if we can reach the point where we are united behind a positive message, as well as a critique of the Clinton program.37 (Ex. 84, GOPAC2 0003).

36 The copy of the letter produced is a draft. While Mr. Gingrich was not able to specifically identify the letter, he did state that the letter fit the message and represented the major theme of GOPAC at that time. (7/17/96 Gingrich Tr. 60–61).

In a letter dated June 21, 1993, that Pamla Prochnow, GOPAC’s new finance director, sent to Charter Members as a follow-up to an earlier letter from Mr. 
Gingrich, she states: As the new finance director, I want to introduce myself and to assure you of my commitment and enthusiasm to the recruitment and training of grassroots Republican candidates. In addition, with the course Newt will be teaching in the fall—Renewing American Civilization—I see a very real opportunity to educate the American voting population to Republican ideals, increasing our opportunity to win local, state and Congressional seats.38 (Ex. 85, PP000194). On January 3, 1994, Ms. Prochnow sent another letter to the Charter Members. It states: As we begin the new year, we know our goals and have in place the winning strategies. The primary mission is to elect Republicans at the local, state and congressional level. There, also, is the strong emphasis on broadcasting the message of renewing American civilization to achieve peace and prosperity in this country. (Ex. 86, PP000866). In another letter sent over Mr. Gingrich’s name, the course is again discussed. The letter, dated May 12, 1994, is addressed to Marc Bergschneider and states: I am encouraged by your understanding that the welfare state cannot merely be repaired, but must be replaced and have made a goal of activating at least 200,000 citizen activists nationwide through my course, Renewing American Civilization. We hope to educate people with the fact that we are entering the information society. In order to make sense of this society, we must rebuild an opportunistic country. In essence, if we can reach Americans through my course, independent expenditures, GOPAC and other strategies, we just might unseat the Democratic majority in the House in 1994 and make government accountable again. (Ex. 87, GDC 01137). Current and former GOPAC employees said that before a letter would go out over Mr. Gingrich’s signature, it would be approved by him. (7/3/96 Rogers Tr. 88; 6/27/96 Nelson Tr. 56–60). According to Mr. Eisenach, Mr. 
Gingrich ‘‘typically’’ reviewed letters that went out over his signature, but did not sign all letters that were part of a mass mailing. (7/12/96 Eisenach Tr. 35). With respect to letters sent to individuals over Mr. Gingrich’s name, Mr. Eisenach said the following:

Mr. Eisenach: [Mr. Gingrich] would either review those personally or be generally aware of the content. In other words, on rare, if any, occasions, did I or anybody else invent the idea of sending a letter to somebody, write the letter, send it under Newt’s signature and never check with him to see whether he wanted the letter to go. There were occasions—now, sometimes that would be—Newt and I would discuss the generic need for a letter. I would write the letter and send it and fax a copy to him and make sure he knew that it had been sent.

Mr. Cole: Would you generally review the contents of the letter with him prior to it going out?

Mr. Eisenach: Not necessarily word for word. It would depend. But as a general matter, yes.

(7/12/96 Eisenach Tr. 36).

37 Jana Rogers had not seen this letter before her interview, but after reading it she said that through her work on the course, she believed the contents of the letter set out one of the goals of the Renewing American Civilization course. (7/3/96 Rogers Tr. 75–76).

38 Both Dr. Mescon and Dr. Siegel of KSC were shown some of these letters. They both said that had they known of this intention in regard to the course, they would not have viewed it as an appropriate project for KSC. (6/13/96 Mescon Tr. 84–87; 6/13/96 Siegel Tr. 60–62).

Mr. Gingrich’s Administrative Assistant, Rachel Robinson, stated that in 1993 and 1994 whenever she received a letter or other document for Mr. Gingrich that was to be filed, she would sign Mr. Gingrich’s name on the document and place her initials on it. This ‘‘usually’’ meant that Mr. Gingrich had seen the letter. (9/6/96 Robinson Tr. 4). The letter sent to Mr. 
Bergschneider on May 12, 1994, was produced from the files of Mr. Gingrich’s Washington, D.C. office and has Ms. Robinson’s initials on it. (9/6/96 Robinson Tr. 4). The letters sent out over Mr. Gingrich’s signature were shown to Mr. Gingrich during an interview. He said that none of them contained his signature, that he did not recall seeing them prior to the interview, and that he would not have written them in the language used. (7/17/96 Gingrich Tr. 77–78, 140–141). Mr. Gaylord said that ‘‘it seemed to [him] there was a whole series of kind of usual correspondence that was done by the staff’’ that Mr. Gingrich would not see. (7/15/96 Gaylord Tr. 77). The content of the letters listed above, however, is quite similar to statements made directly by Mr. Gingrich about the movement and the role of the course in the movement. (See, e.g., Ex. 47, JR–0000646 (‘‘emphasis is on the Republican Party as the primary vehicle for renewing American civilization’’); Ex. 52, GDC 10639–10649 (‘‘sweeping victory’’ will be accomplished through the course); Ex. 88, GDC 10729–10733 (‘‘Democrats are the party of the welfare state.’’ ‘‘Only by voting Republican can the welfare state be replaced and an opportunity society be created.’’))

D. ‘‘Replacing the Welfare State With an Opportunity Society’’ as a Political Tool

According to Mr. Gingrich, the main theme of both the Renewing American Civilization movement and the course was the replacement of the welfare state with an opportunity society. (7/17/96 Gingrich Tr. 52, 61, 170; 11/13/96 Gingrich Tr. 85). Mr. Gingrich also said, ‘‘I believe that to replace the welfare state you almost certainly had to have a [R]epublican majority.’’ (7/17/96 Gingrich Tr. 51). ‘‘I think it’s hard to replace the welfare state with the [D]emocrats in charge.’’ (7/17/96 Gingrich Tr. 62). 
The course was designed to communicate the vision and language of the Renewing American Civilization movement and ‘‘was seen as a tool that could be used to replace the welfare state.’’ (7/17/96 Gingrich Tr. 159–160; see also 11/13/96 Gingrich Tr. 47, 76).39 In addition to being the title of a movement, the course, and GOPAC’s political message for 1993 and 1994, ‘‘Renewing American Civilization’’ was also the main message of virtually every political and campaign speech made by Mr. Gingrich in 1993 and 1994. (7/17/96 Gingrich Tr. 69).40 According to Mr. Gingrich, there was an effort in 1994 to use the ‘‘welfare state’’ label as a campaign tool against the Democrats and to use the ‘‘opportunity society’’ label as an identification for the Republicans. (7/17/96 Gingrich Tr. 113). Mr. Gingrich made similar comments in a subsequent interview:

Mr. Cole: During [1993–1994] was there an effort to connect the Democrats with the welfare state?
Mr. Gingrich: Absolutely; routinely and repetitively.
Mr. Cole: And a campaign use of that?
Mr. Gingrich: Absolutely.
Mr. Cole: A partisan use, if you will?
Mr. Gingrich: Absolutely.
Mr. Cole: And was there an effort to connect the Republicans with the opportunity society?
Mr. Gingrich: Absolutely.
Mr. Cole: A partisan use?
Mr. Gingrich: Yes, sir.
Mr. Cole: And that was the main theme of the course, was it not, replacement of the welfare state with the opportunity society?
Mr. Gingrich: No. The main theme of the course is renewing American civilization and the main subset is that you have—that you have to replace the welfare state with an opportunity society for that to happen.

(11/13/96 Gingrich Tr. 79–80).

As referred to above, Mr. Gingrich held a training seminar for candidates on behalf of GOPAC at the Virginia Republican Convention in June 1993. (7/15/96 Gaylord Tr. 29–30). He gave a speech entitled ‘‘Renewing American Civilization’’ which described the nature of the movement and the course. (Ex. 56, GOPAC2 2146–2209). 
Near the beginning of his speech, Mr. Gingrich said:

What I first want to suggest to you [is] my personal belief that we are engaged in a great moral and practical effort, that we are committed to renewing American civilization, and I believe that’s our battle cry. That we want to be the party and the movement that renews American civilization and that renewing American civilization is both an idealistic cause and a practical cause at the same time. (Ex. 56, GOPAC2 2146).

39 During his interview, the following exchange occurred regarding the movement:
Mr. Cole: Yet there was an emphasis in the movement on the Republican Party?
Mr. Gingrich: There certainly was on my part, yes.
Mr. Cole: You were at the head of the movement, were you not?
Mr. Gingrich: Well, I was the guy trying to create it.
Mr. Cole: The course was used as the tool to communicate the message of the movement, was it not?
Mr. Gingrich: Yes, it was a tool, yes.
(11/13/96 Gingrich Tr. 76).

40 According to Ms. Rogers, the course’s Site Host Coordinator, there was coordination between the message, the movement, and activists. ‘‘They were extensions of Newt and each had to make—each group had to make sure—what I mean specifically is GOPAC and the class had to make sure that they were using the same message that Newt was trying to disseminate, that it was identical.’’ (7/3/96 Rogers Tr. 54).

He then told the audience that he has four propositions with which 80% to 95% of Americans will agree. These are: (1) there is an American civilization; (2) the four can’ts; (3) the welfare state has failed; and (4) to renew American civilization it is necessary to replace the welfare state. (Ex. 56, GOPAC2 2149–2153).41 Mr. Gingrich then went on to relate the principles of renewing American civilization to the Republican party:

We can’t do much about the Democrats. They went too far to the left. They are still too far to the left. That’s their problem. 
But we have a huge burden of responsibility to change our behavior so that every one who wants to replace the welfare state and every one who wants to renew American civilization has a home, and it’s called being Republican. We have to really learn how to bring them all in. And I think the first step of all that is to insist that at the core of identification the only division that matters is that question. You want to replace the welfare state and renew American civilization. The answer is just fine, come and join us. And not allow the news media, not allow the Democrats, not allow interest groups to force us into fights below that level in terms of defining who we are. That in any general election or any effort to govern that we are every one who is willing to try to replace the welfare state, and we are every one who is willing to renew American civilization. Now, that means there is a lot of ground in there to argue about details. Exactly how do you replace the welfare state. Exactly which idea is the best idea. But if we accept every one coming in, we strongly change the dynamics of exactly how this country is governed and we begin to create a majority Republican party that will frankly just inexorably crow[d] out the Democrats and turn them into minority status. (Ex. 56, GOPAC2 2155–2156). Mr. Gingrich told the audience that he would discuss three areas in his remarks: (1) the principles of renewing American civilization; (2) the principles and skills necessary to be a ‘‘renewing candidate’’ and then ultimately a ‘‘renewing incumbent;’’ and (3) the concept and principles for creating a community among those who are committed to replacing the welfare state and renewing American civilization. (Ex. 56, GOPAC2 2168). In speaking of the first area, Mr. Gingrich said that it is a very complicated subject. Because of this he was only going to give a ‘‘smattering’’ of an outline at the training seminar. (Ex. 56, GOPAC2 2170). 
He said, however, that in the fall he planned to teach a twenty-hour course on the subject, and then refine it and teach it again over a four-year period. (Ex. 56, GOPAC2 2170). He then described the three goals he had for the course:

First, we want to have by April of ’96 a genuine intellectual blueprint to replace the welfare state that you could look at as a citizen and say, yeah, that has a pretty good chance of working. That’s dramatically better than what we’ve been doing.

Second, we want to find 200,000 activist citizens, and I hope all of you will be part of this, committed at every level of American life to replacing the welfare state. Because America is a huge decentralized country. You’ve got to have school boards, city councils, hospital boards, state legislatures, county commissioners, mayors, and you’ve got to have congressmen and senators and the President and governors, who literally [sic] you take all the elected posts in America and then you take all the people necessary to run for those posts and to help the campaigns, etc., I think it takes around 200,000 team players to truly change America. (Ex. 56, GOPAC2 2170–2171).

Third, we create a process—and this is something you can all help with in your own districts—we create a process interesting enough that the national news media has to actually look at the material in order to cover the course.42 (Ex. 56, GOPAC2 2173).

41 These four propositions were used as the ‘‘central propositions’’ or ‘‘heart’’ of the course to introduce each session in 1993 and 1994. (Ex. 54, PFF 18361, 18365–18367).

The transcript of his speech goes on for the next 30 pages to describe the five pillars of American civilization that form the basis of the course, and how to use them to get supporters for the candidates’ campaigns. In discussing this Mr. Gingrich said:

Now, let me start just as [a] quick overview. First, as I said earlier, American civilization is a civilization. Very important. 
It is impossible for anyone on the left to debate you on that topic.

* * * * * * *

But the reason I say that is if you go out and you campaign on behalf of American civilization and you want to renew American civilization, it is linguistically impossible to oppose you. And how is your opponent going to get up and say I’m against American civilization? (Ex. 56, GOPAC2 2175–2176).

Near the end of the speech he said:

I believe, if you take the five pillars I’ve described, if you find the three areas that will really fit you, and are really in a position to help you, that you are then going to have a language to explain renewing American civilization, a language to explain how to replace the welfare state, and three topics that are going to arouse volunteers and arouse contributions and help people say, Yes, I want this done. (Ex. 56, GOPAC2 2207).43

42 These are the same three specific goals that were listed in the document entitled ‘‘Renewing American Civilization Our Goal’’ that referred to achieving a ‘‘sweeping victory in 1996’’ as the overall goal. (Ex. 52, GDC 10647–10648).

In a document that Mr. Gingrich apparently wrote during this time (Ex. 89, Eisenach 2868–2869), the course is related to the Renewing American Civilization movement in terms of winning a Republican majority. The ‘‘House Republican Focus for 1994’’ is directed at having Republicans communicate a positive message so that a majority of Americans will conclude that their only hope for real change is to vote Republican. In describing that message, the document states:

The Republican party can offer a better life for virtually every one if it applies the principles of American civilization to create a more flexible, decentralized market oriented system that uses the Third Wave of change and accepts the disciplines of the world market. 
These ideas are outlined in a 20 hour intellectual framework ‘‘Renewing American Civilization’’ available on National Empowerment Television every Wednesday from 1 pm to 3 pm and available on audio tape and video tape from 1–800–TO–RENEW. (Ex. 89, Eisenach 2869).

In a document dated March 21, 1994, and entitled ‘‘RENEWING AMERICA: The Challenge for Our Generation,’’44 Mr. Gingrich described a relationship between the course and the movement. (Ex. 90, GDC 00132–00152). Near the beginning of the document, one of the ‘‘key propositions’’ listed is that the welfare state has failed and must be replaced with an opportunity society. (Ex. 90, GDC 00136). The opportunity society must be based on, among other things, the principles of American civilization. (Ex. 90, GDC 00136). The document states that the key ingredient for success is a movement to renew American civilization by replacing the welfare state with an opportunity society. (Ex. 90, GDC 00137). That movement will require at least 200,000 ‘‘partners for progress’’ committed to the goal of replacing the welfare state with an opportunity society and willing to study the principles of American civilization, work on campaigns, run for office, and engage in other activities to further the movement. (Ex. 90, GDC 00138).45 Under the heading ‘‘Learning the Principles of American Civilization’’ the document states, ‘‘The course, ‘Renewing American Civilization’, is designed as a 20 hour introduction to the principles necessary to replace the welfare state with an opportunity society.’’ (Ex. 90, GDC 00139). It then lists the titles of each class and the book of readings associated with the course. The next section is titled ‘‘Connecting the ‘Partners’ to the ‘Principles’.’’ (Ex. 90, GDC 00140). It describes where the course is being taught, including that it is being offered five times during 1994 on National Empowerment Television, and states that, ‘‘Our goal is to get every potential partner for progress to take the course and study the principles.’’ (Ex. 90, GDC 00140).46 The document then lists a number of areas where Republicans can commit themselves to ‘‘real change,’’ including the Contract with America and a concerted effort to end the Democratic majority in the House. (Ex. 90, GDC 00144–00150).

43 As discussed above, this speech was used by GOPAC to produce two training tapes. One was called ‘‘Renewing American Civilization’’ and the other was called ‘‘Leading the Majority.’’ (7/15/96 Gaylord Tr. 31).

44 Mr. Gingrich at least wrote the first draft of this document and stated that it was compatible with what he was doing at that time. It was probably a briefing paper for the House Republican members. (Ex. 90, GDC 00132–00152; 7/17/96 Gingrich Tr. 203–204).

45 In this section he defines the ‘‘partners for progress’’ as ‘‘citizens activists.’’

In a May 10, 1994 document which Mr. Gingrich drafted (7/18/96 Gingrich Tr. 234–235; 7/15/96 Gaylord Tr. 70), entitled ‘‘The 14 Steps[:] Renewing American Civilization by replacing the welfare state with an opportunity society,’’ he notes the relationship between the course and the partisan aspects of the movement. (Ex. 88, GDC 10729–10733). After stating that the welfare state has failed and needs to be replaced (Ex. 88, GDC 10729), the document states that, ‘‘Replacing the welfare state will require a disciplined approach to both public policy and politics.’’ (Ex. 88, GDC 10730). ‘‘We must methodically focus on communicating and implementing our vision of replacing the welfare state.’’ (Ex. 88, GDC 10730). In describing the replacement that will be needed, Mr. Gingrich says that it:

must be an opportunity society based on the principles of American civilization * * *. These principles each receive two hours of introduction in ‘Renewing American Civilization’, a course taught at Reinhardt College. The course is available on National Empowerment Television from 1–3 P.M. 
every Wednesday and by videotape or audiotape by calling 1–800–TO–RENEW. (Ex. 88, GDC 10730).

This document goes on to describe the 200,000 ‘‘partners for progress’’ as being necessary for the replacement of the welfare state and how the Contract with America will be a first step toward replacing the welfare state with an opportunity society. (Ex. 88, GDC 10731). The document then states:

The Democrats are the party of the welfare state. Too many years in office have led to arrogance of power and to continuing violations of the basic values of self-government. Only by voting Republican can the welfare state be replaced and an opportunity society be created. (Ex. 88, GDC 10731).

On November 1, 1994, Mr. Gingrich attended a meeting with Ms. Minnix, his co-teacher at Reinhardt, to discuss the teaching of the course in 1995. (Ex. 92, Reinhardt 0063–0065). Also at that meeting were Mr. Hanser, Ms. Desmond, Mr. Eisenach, and John McDowell. One of the topics discussed at the meeting was Mr. Gingrich’s desire to teach the course on a second day in Washington, D.C. According to notes of the meeting prepared by Ms. Minnix, Mr. Gingrich wanted to teach the course in D.C. in an effort:

to attract freshman congresspeople, the press—who will be trying to figure out the Republican agenda—and congressional staff looking for the basis of Republican doctrine. ‘Take the course’ will be suggested to those who wonder what a Republican government is going to stand for. (Ex. 92, Reinhardt 0064).47

46 The course was broadcast twice each week on National Empowerment Television. In light of it being a ten-week course, and being offered five times during 1994 on NET, it ran for 50 weeks during this election year. In addition to being on NET, it was also on a local cable channel in Mr. Gingrich’s district in Georgia. (Ex. 91, DES 01048; 7/18/96 Gingrich Tr. 257–259).

Later in the meeting Mr. 
Gingrich said that his chances of becoming Speaker were greater than 50 percent and he was making plans for a transition from Democratic to Republican rule. Ms. Minnix wrote that Mr. Gingrich ‘‘sees the course as vital to this—so vital that no one could convince him to teach it only one time per week and conserve his energy.’’ (Ex. 92, Reinhardt 0065).48 A number of other documents reflect a similar partisan, political use of the message and theme of Renewing American Civilization. (Ex. 93, LIP 00602–00610, (‘‘Renewing American Civilization: Our Duty in 1994,’’ a speech given to the Republican National Committee January 21, 1994 Winter Breakfast); Ex. 94, GDC 11010–11012, (‘‘Whip Office Plan for 1994’’ with the ‘‘vision’’ of ‘‘Renew American civilization by replacing the welfare state which requires the election of a Republican majority and passage of our agenda’’); Ex. 95, GDC 10667–10670, (‘‘Planning Assumptions for 1994’’); Ex. 96, Eisenach 2758–2777, (untitled); Ex. 97, PFF 2479–2489, (seminar on Renewing American Civilization given to the American Legislative Exchange Council); Ex. 98, PFF 37179–37188, (‘‘House GOP Freshman Orientation: Leadership for America’s 21st Century.’’))

47 Ms. Minnix stated that the word ‘‘Republican’’ may not have been specifically used by Mr. Gingrich, but that it was the context of his remark. (6/12/96 Minnix Tr. 54–56).

48 The other participants at this meeting were asked about this conversation. To the extent they recalled the discussion, they confirmed that it was as related in Ms. Minnix’s memorandum. No one had a recollection that was contrary to Ms. Minnix’s memorandum. (6/12/96 Minnix Tr. 54–56; 6/28/96 Hanser Tr. 71–72; 6/13/96 Desmond Tr. 76–78; 7/12/96 Eisenach Tr. 270–271; 7/17/96 Gingrich Tr. 211–215).

E. Renewing American Civilization House Working Group

As stated in Mr. Gingrich’s easel notes from December 1992, one goal of the Renewing American Civilization movement was to ‘‘professionalize’’ the House Republicans. (Ex. 42, HAN 02110). His intention was to use the message of Renewing American Civilization to ‘‘attract voters, resources and candidates’’ and to develop a ‘‘mechanism for winning seats.’’ (Ex. 42, HAN 02110). In this vein, a group of Republican House Members and others formed a working group to promote the message of Renewing American Civilization. Starting in approximately June 1993, Mr. Gingrich sponsored Representative Pete Hoekstra as the leader of this group and worked with him. (7/18/96 Gingrich Tr. 279).49 According to a number of documents associated with this group, a goal was to use the theme of renewing American civilization to elect a Republican majority in the House. (Ex. 99, Hoekstra 0259; Ex. 101, Hoekstra 0264; Ex. 102, Gregorsky 0025).

49 Mr. Gingrich provided Mr. Hoekstra with some materials to explain the movement. (See Ex. 99, Hoekstra 0259). Apparently, this material included the May 13, 1993, three part document entitled ‘‘Renewing America Vision,’’ ‘‘Renewing America Strategies,’’ and ‘‘Renewing American Civilization Our Goal.’’ (Ex. 52, GDC 10639–10649). In a memorandum from one of Mr. Hoekstra’s staffers analyzing the material, he lists the thirteen items that were to be done to further the movement. (Ex. 100, Hoekstra 0140b). They are the same thirteen items that are listed in the ‘‘Renewing America Strategies’’ portion of the May 13, 1993 document.

According to notes from a July 23, 1993 meeting, Mr. Gingrich addressed the group and made several points:

1. Renewing American Civilization (RAC) is the basic theme;
2. RAC begins with replacing the welfare state, not improving it;
3. RAC will occur by promoting the use of the five pillars of American civilization;
4. Use of the three key policy areas of saving the inner city, health, and economic growth and jobs. (Ex. 101, Hoekstra 0264).

The meeting then turned to a discussion of possible ways to improve these points. (Ex. 101, Hoekstra 0264). 
On July 30, 1993, another meeting of this group was held. According to notes of that meeting, the group restated its objectives as follows:

a. restate our objective: Renewing American Civilization by replacing the paternalistic welfare state
—GOP majority in the House ASAP
—nationwide GOP majority ASAP

* * * * * * *

—objective: create ‘‘echo chamber’’ for RAC

* * * * * * *

i. develop RAC with an eye toward marketability

* * * * * * *

ii. promote message so that this defines many 1994 electoral contests at the congressional level and below, and defines the 1996 national election. (Ex. 102, Gregorsky 0025).50

50 Mr. Gingrich reviewed notes similar to these and though he did not specifically recall them, he said they were compatible with the activities of that time. (7/18/96 Gingrich Tr. 283–284).

The goal of the group was further defined in a memorandum written by one of Mr. Hoekstra’s staffers in September of 1993. (Ex. 103, Hoekstra 0266–0267). In that memorandum, the staff member said the group’s goal had changed ‘‘from one of promoting the Renewing American Civilization course to one of proposing a ‘political platform’ around which House Republican incumbents and candidates can rally.’’ (Ex. 103, Hoekstra 0266). The group’s ‘‘underlying perspective’’ was described as follows:

To expand our party, it is important that Republicans develop, agree on and learn to explain a positive philosophy of government. At the core of that philosophy is the observation that the paternalistic welfare state has failed, and must be replaced by alternative mechanisms within and outside of government if social objectives are to be achieved.

Fundamental to developing a new philosophy is the idea that traditions in American civilization have proven themselves to be powerful mechanisms for organizing human behavior. There are working principles in the lessons of American history that can be observed, and should be preserved and strengthened. 
These working principles distinguish the Republican party and its beliefs from the Democratic party, which remains committed to the welfare state even though these policies are essentially alien to the American experience. (Ex. 103, Hoekstra 0266–0267).

This group began to develop a program to incorporate Renewing American Civilization into the House Republican party. The program’s goals included a House Republican majority, Mr. Gingrich as Speaker, and Republican Committee Chairs. (Ex. 104, Hoekstra 0147–0151). To accomplish this goal, there were efforts to have candidates, staffers and members use Renewing American Civilization as their theme. (Ex. 104, Hoekstra 0148). One proposal in this area was a training program for staffers in the principles of Renewing American Civilization for use in their work in the House. (Ex. 104, Hoekstra 0148). A memorandum from Mr. Gingrich to various members of his staffs51 asked them to review a plan for this training program and give him their comments. (Ex. 105, WGC 03732–03745). During his interview, Mr. Hoekstra stated that Renewing American Civilization and the concept of replacing the welfare state was intended as a means of defining who Republicans were; however, the group never finalized this as a project. (7/29/96 Hoekstra Tr. 47–48). In talking about this group, Mr. Gingrich said that he wanted the Republican party to move toward Renewing American Civilization as a theme and that he would have asked the group to study the course, understand the ideas, and use those ideas in their work. (7/18/96 Gingrich Tr. 284–286). It is not known what became of this group. Mr. Hoekstra said that the project ended without any closure, but he does not recall how that happened. (7/29/96 Hoekstra Tr. 46).

F. Marketing of the Course

As discussed above, Mr. 
Gingrich wrote in his March 29, 1993 memorandum that he wanted ‘‘Republican activists committed * * * to setting up workshops built around the course, and to opening the party up to every citizen who wants to renew American civilization.’’ (Ex. 51, GDC 08892). There is evidence of efforts being made to recruit Republican and conservative organizations into becoming sponsors for the course. These sponsors were known as ‘‘site hosts.’’ One of the responsibilities of a site host was to recruit participants. (Ex. 106, PFF 8033). Jana Rogers was the Site Host Coordinator for the course when it was at Kennesaw State College. She stated that part of her work in regard to the course involved getting Republican activists to set up workshops around the course to bring people into the Republican party. (7/3/96 Rogers Tr. 67–68).
51 This included his congressional office, his WHIP office, RAC, and GOPAC.
She said there was an emphasis on getting Republicans to be site hosts. (7/3/96 Rogers Tr. 69). In an undated document entitled ‘‘VISION: To Obtain Site Hosts for Winter 1994 Quarter,’’ three ‘‘projects’’ are listed: (1) ‘‘To obtain site hosts from conservative organizations;’’ (2) ‘‘To secure site hosts from companies;’’ (3) ‘‘To get cable companies to broadcast course.’’ (Ex. 107, PFF 7526). The ‘‘strategies’’ listed to accomplish the ‘‘project’’ of obtaining site hosts from conservative organizations are listed as:
Mailing to State and local leaders through lists from National Republican Committee, Christian Coalition, American Association of Christian Schools, U.S. Chamber of Commerce, National Right to Life, Heritage Foundation, Empower America, National Empowerment Television, Free Congress, etc.
(Ex. 107, PFF 7526). One of the tactics listed to accomplish the goal of obtaining more site hosts is to:
Contact National College Republican office to obtain names and addresses of all presidents country-wide.
Develop letter to ask college republicans to try to obtain the class for credit on their campus or to become a site host with a sponsor group. Also, ask them to contact RAC office for a site host guide and additional information.
(Ex. 107, PFF 7527).
In a memorandum written by Nancy Desmond concerning the course, among the areas where she suggested site host recruiting should be directed were to ‘‘NAS members,’’52 ‘‘schools recognized as conservative’’ and ‘‘national headquarters of conservative groups.’’ (Ex. 108, PFF 37328–37330). In a number of the project reports written by employees of the course in 1993, there are notations about contacts with various Republicans in an effort to have them host a site for the course. There are no similar notations of efforts to contact Democrats. (Ex. 109, Multiple Documents).53 In several instances mailings were made to Republican or conservative activists or organizations in an effort to recruit them as site hosts. In May of 1993 a letter was sent over Mr. Gingrich’s signature to approximately 1,000 College Republicans regarding the course.54 That letter states that:
[C]onservatives today face a challenge larger than stopping President Clinton. ‘‘Renewing American Civilization.’’ I am writing to you today to ask you to enroll for the class, and to organize a seminar so that your friends can enroll as well.
* * * * * * *
Let me be clear: This is not about politics as such. But I believe the ground we will cover is essential for anyone who hopes to be involved in politics over the next several decades to understand. American civilization is, after all, the cultural glue that holds us all together. Unless we can understand it, renew it and extend it into the next century, we will never succeed in replacing the Welfare State with an Opportunity Society.
* * * * * * *
(Ex. 81, Mescon 0915; Meeks 0039). The letter ends by stating:
I have devoted my life to teaching and acting out a set of values and principles. As a fellow Republican, I know you share those values. This class will help us all remember what we’re about and why it is so essential that we prevail. Please join me this Fall for ‘‘Renewing American Civilization.’’
(Ex. 81, Mescon 0914; Meeks 0040). GOPAC paid for this mailing (7/12/96 Eisenach Tr. 200; 7/15/96 Gaylord Tr. 82) and it was listed as a ‘‘political’’ project on GOPAC’s description of its ‘‘Major Projects Underway’’ for May 7, 1993. (Ex. 79, JG 000001152). At the top of a copy of the letter to the College Republicans is a handwritten notation to Mr. Gingrich from Mr. Eisenach: ‘‘Newt, Drops to 1000+ C.R. Chapters on Wednesday. JE cc: Tim Mescon.’’ (Ex. 81, Mescon 0915, Meeks 0039).
52 According to Mr. Gingrich, the NAS (National Association of Scholars) is a conservative organization. (7/18/96 Gingrich Tr. 345–346).
53 Mr. DuGally said that he made an effort to contact the Young Democrats, but they did not show any interest. (7/19/96 DuGally Tr. 31–32).
54 Mr. Gingrich was shown this letter and he said that while he was not familiar with it, nothing in it was particularly new. (7/17/96 Gingrich Tr. 87). Jeff Eisenach, GOPAC’s Executive Director and then the coordinator of the course, either wrote the letter or edited it from a draft written by another GOPAC employee. (7/12/96 Eisenach Tr. 200–201).
During an interview with Mr. Cole, Mr. Eisenach was asked about this letter.
Mr. Eisenach: Use of the course by political institutions in a political context was something that occurred and was part of Newt’s intent and was part of the intent of other partisan organizations, but the intent of the course and, most importantly, the operation of the course and its use of tax-exempt funds was always and explicitly done in a nonpartisan way.
Political organizations—in this case, GOPAC—found it to their advantage to utilize the course for a political purpose, and they did so.
Mr. Cole: Were you involved in GOPAC?
Mr. Eisenach: At this time I was involved in GOPAC, yes.
Mr. Cole: And in making the decision that GOPAC would utilize the course?
Mr. Eisenach: Yes.
(7/12/96 Eisenach Tr. 203).
Mr. DuGally worked with Economics America, Inc. to have them send a letter to the members of the groups listed in The Right Guide as part of an effort to recruit them as site hosts. The first paragraph of the letter states:
Newt Gingrich asked that I tell the organizations listed in The Right Guide about his new nationally broadcast college course, ‘‘Renewing American Civilization.’’ It promises to be an important event for all conservatives, as well as many young people who are not yet conservatives. You and your organization can be part of this project.
(Ex. 110, PFF 19821). The letter goes on to say, ‘‘And remember, since you are a team teacher you can use the course to explain and discuss your views.’’ (Ex. 110, PFF 19821). In the fall of 1993, Mr. DuGally arranged for a letter to be sent by Lamar Alexander on behalf of the Republican Satellite Exchange Network promoting the course and asking its members to serve as site hosts. (Ex. 111, PFF 19795–19798). In addition, a letter was prepared for mailing to all chairmen of the Christian Coalition asking them to serve as site hosts. (Ex. 112, PFF 19815). In June of 1993, Mr. DuGally worked with the Republican National Committee to have a letter sent by Chairman Haley Barbour to RNC Members informing them of the course. (Ex. 113, RNC 0094). This letter did not solicit people to be site hosts. Jana Rogers, the Site Host Coordinator for the course, attended the College Republican National Convention. Her weekly report on the subject said the following:
The response to Renewing American Civilization at the College Republican National Convention was overwelming [sic].
In addition to recruiting 22 sites and possibly another 30+ during follow-up, I was interviewed by MTV about the class and learned more about RESN [Republican Exchange Satellite Network] from Stephanie Fitzgerald who does their site coordination. I also handed out 400 Site Host Guides to College Republicans and about 600 registration flyers. NCRNC says it will work aggressively with their state chairmen to help us set up sites know [sic] that the convention is over.
(Ex. 114, PFF 7613). She made no effort to contact any Democratic groups. (7/3/96 Rogers Tr. 78). In notes provided by Mr. Mescon from a meeting he attended on the course, he lists a number of groups that would be targeted for mailings on the course. They include mostly elected or party officials and the notation ends with the words ‘‘25,000/total Republican mailing.’’ (Ex. 115, Mescon 0263). According to Mr. Mescon, the course was being marketed to Republicans as a target audience and he knew of no comparable mailing to Democrats. (6/13/96 Mescon Tr. 112–113).55 In an August 11, 1993, memorandum from Mr. DuGally, a WPG employee who worked on the course, he lists the entities where mailings for the course had been sent or were intended to be sent up to that point. They are as follows:
1. GOPAC farm team—9,000
2. Cong/FONG/Whip offices—4,000
3. Sent to site hosts—5,500
4. College Republicans—2,000
5. American Pol Sci Assoc.—11,000
6. Christian Coalition leadership—3,000
7. The Right Guide list—3,000
(Ex. 116, PFF 19794).
In June of 1994, John McDowell wrote to Jeff Eisenach with his suggestions about where to market the course during that summer.
55 Others who worked on the course also said it was marketed to Republican and conservative groups. (7/3/96 Rogers Tr. 62–63; 6/13/96 Stechschulte Tr. 21–22, 57–58; 6/13/96 Desmond Tr. 66).
The groups he listed were the Eagle Forum Collegians; the National Review Institute’s Conservative Summit; Accuracy in Academia; Young Republicans Leadership Conference (Mr. McDowell was on their Executive Board); Young America’s Foundation, National Conservative Student Conference; College Republican National Conference; the American Political Science Association Annual Meeting;56 and the Christian Coalition, Road to Victory. (Ex. 117, PFF 3486–3489). At a number of these meetings, Mr. Gingrich was scheduled to be a speaker. (Ex. 117, PFF 3486–3489). A site host listing dated August 18, 1994, identifies the approximately 100 site hosts as of that date. (Ex. 118, PFF 7493–7496). These include businesses, community groups, cable stations, and others. In addition, some colleges offered the course either for credit, partial credit or no credit. (Ex. 119, Reinhardt 0160–0164). Based on their names, it was not possible to determine whether all of the site hosts fell within the goals set forth in the above-described documents. Some of them, however, were identifiable. For example, of the 28 ‘‘community groups’’ listed on the August 18, 1994 ‘‘Site Host Listing,’’ 11 are organizations whose names indicate they are Republican or conservative organizations—Arizona Republican Party; Athens Christian Coalition; Conservative PAC; Henry County Republicans; Houston Young Republicans; Huron County Republican Party; Las Rancheras Republican Women; Louisiana Republican Legislative Delegation; Northern Illinois Conservative Council; Republican Party Headquarters (in Frankfort, Kentucky); Suffolk Republican Party. The list does not indicate whether the remaining groups—e.g., the Alabama Family Alliance; the Family Foundation (Kentucky); Leadership North Fulton (Georgia); the North Georgia Forum; Northeast Georgia Forum; the River of Life Family Church (Georgia)—are nonpartisan, Democratic, Republican, liberal or conservative.
The list does not contain any organizations explicitly denominated as Democratic organizations. Similarly, it is not clear whether there was a particular political or ideological predominance in the businesses, cable stations and individuals listed.57 Mr. Gingrich said that the efforts to recruit colleges to hold the course had been ‘‘very broad.’’ ‘‘I talked, for example, with the dean of the government school at Harvard. Berkley [sic] actually was offering the course.’’ (7/18/96 Gingrich Tr. 346). The course at Berkeley, however, did not go through the regular faculty review process for new courses, because it was initiated by a student. (7/12/96 Eisenach Tr. 316–317). Such courses were not conducted by a professor, but could be offered on campus for credit if a faculty member sponsored the course and the Dean approved it. The student site host coordinator at Berkeley was named Greg Sikorski. (Ex. 121, JR–0000117).
56 This is the only meeting where there is not a suggestion to have a Renewing American Civilization or PFF employee attend personally. Instead, Mr. McDowell apparently only intended to find an attendee who would be willing to pass out Renewing American Civilization materials.
57 Patti Hallstrom, an activist in the Arizona Republican Party, was instrumental in recruiting host sites in Arizona, such as the Arizona Republican Party and various cable television stations. (Ex. 120, PFF 7362). She prepared part of a training manual on how to recruit cable companies as host sites. (Ex. 120, DES 00999–01007). She also provided the Renewing American Civilization project with information about which radio and talk shows in Arizona were the most conservative as possible shows where Mr. Gingrich could appear. She said the more conservative shows would allow for a ‘‘more amenable discussion.’’ (Ex. 120, DES 00262–00264; 6/20/96 Hallstrom Tr. 41–43).
In the June 20, 1994 memorandum from John McDowell to Mr.
Eisenach, the following is written under the heading ‘‘College Republican National Conference:’’ ‘‘RAC Atlanta representative to attend and staff a vendor booth. These 1,000 college students represent a good source of future ‘Greg Sikorskis’ * * * in the sense that they can promote RAC on their campus!’’ (Ex. 117, PFF 3488). The faculty sponsor for the student-initiated Renewing American Civilization course was William Muir, a former speechwriter for George Bush. (Ex. 121, JR–0000117). Aside from Mr. Sikorski and Mr. Muir, Mr. Eisenach did not know if the RAC course at Berkeley had any additional university review. (7/12/96 Eisenach Tr. 319). The site host for the Renewing American Civilization course at Harvard was Marty Connors. (Ex. 122, LIP 00232). According to Mr. Gingrich, Marty Connors is a conservative activist. (7/18/96 Gingrich Tr. 266). In a memorandum dated October 13, 1993, from Marty Connors to Lamar Alexander, Newt Gingrich, Ed Rogers, Jeff Eisenach, Paul Weyrich, Mike Baroody, and Bill Harris, he wrote about a ‘‘series of ideas (that included the Renewing American Civilization course) that could have significant consequences in building a new ‘Interactive’ communication system and message for the Republican Party and the conservative movement.’’ (Ex. 123, WGC 06781). He goes on to write that he was working on a project to take the concept of the Republican Exchange Satellite Television, National Empowerment Television and Newt Gingrich’s ‘‘Renewing American Civilization’’ lectures and make them ‘‘more interactive and user friendly.’’ (Ex. 123, WGC 06781). The purpose for this is to have a ‘‘far greater ability for ‘participatory’ party building in the immediate future.’’ (Ex. 123, WGC 06781–06782). He goes on to write, ‘‘Friends, I truly believe the next major political advantage will go to the group that figures out how to use ‘interactive’ communications in building a new Republican coalition.’’ (Ex. 123, WGC 06782).58
58 This memorandum was faxed to Mr. Gingrich. The fax cover sheet has Mr. Gingrich’s name and the date ‘‘10/15/93’’ on it in his handwriting. As Mr. Gingrich has said, this probably indicates that he had seen this memorandum. (12/98/96 Gingrich Tr. 36–37).
G. Kennesaw State College’s Role in the Course
Renewing American Civilization was taught at Kennesaw State College (‘‘KSC’’) in 1993. The sponsoring organization for the course was the Kennesaw State College Foundation (‘‘KSCF’’), a 501(c)(3) organization dedicated to promoting projects at KSC. The approximate expenditures for the course at KSC were $300,000. This represented 29–33% of KSCF’s program expenditures for 1993. The funds raised for the course and donated to KSCF were tax-deductible. KSCF had no role in raising funds for the course. (6/13/96 Fleming Tr. 33–36). Mr. Mescon, the course’s co-teacher and Dean of KSC’s Business School, wrote some letters with the help of Ms. Prochnow, GOPAC’s Finance Director (6/13/96 Mescon Tr. 65–68, 71–74; 7/10/96 Prochnow Tr. 58–62, 66; 7/12/96 Eisenach Tr. 69), but most of the fundraising was coordinated by Mr. Eisenach, Ms. Prochnow, and Mr. Gingrich. (7/12/96 Eisenach Tr. 68–71, 84, 97, 99; 7/17/96 Gingrich Tr. 123, 136, 137). The course as offered at KSC was a forty-hour classroom lecture. Twenty hours were taught by Mr. Gingrich and twenty hours were taught by Mr. Mescon. While officials of KSC and KSCF considered the course to include the full forty hours of lecture (6/13/96 Mescon Tr. 38; 6/13/96 Fleming Tr. 23), only the twenty hours taught by Mr. Gingrich were taped and disseminated. (6/13/96 Siegel Tr. 25–26; 6/13/96 Mescon Tr. 35; 6/13/96 Fleming Tr. 23). The funds raised for the course were primarily used for the dissemination of Mr. Gingrich’s portion of the course to the various site host locations. (6/13/96 Fleming Tr. 22, 24; 6/13/96 Mescon Tr. 55–56).
No one at KSC or KSCF had any role in deciding which portions of the course would be taped and disseminated or even knew the reasons for doing it. (6/13/96 Mescon Tr. 36, 44–45, 58–59; 6/13/96 Fleming Tr. 23; 6/13/96 Siegel Tr. 78–79). KSCF did not manage the course. It contracted with Mr. Eisenach’s Washington Policy Group, Inc. (‘‘WPG’’) to manage and raise funds for the course’s development, production and distribution. In return, WPG was paid $8,750 per month. The contract between WPG and KSCF ran from June 1, 1993, through September 30, 1993.59 All funds raised were turned over to KSCF and dedicated exclusively for the use of the Renewing American Civilization course. KSCF’s only role was to act as the banker for the funds for the course and disburse them upon a request from Mr. Mescon. (6/13/96 Fleming Tr. 24–25; 6/13/96 Mescon Tr. 103; Ex. 124, KSF 001269, Mescon 0454, KSF 003804, PFF 16934, KSF 001246). Mr. Mescon did not engage in a detailed review of the bills. He merely reviewed the bills that were provided by Mr. Eisenach or his staff and determined whether the general nature of the bills fell within the parameters of the project of dissemination of the course. (6/13/96 Mescon Tr. 61–63). When the contract between WPG and KSCF ended, the Progress and Freedom Foundation (‘‘PFF’’) assumed the role WPG had with the course at the same rate of compensation.60 PFF was also a 501(c)(3) tax exempt organization, but its status as such was not used while the course was at KSC. Mr. Eisenach was the founder and president of PFF.
59 The contract between WPG and KSCF was never signed by KSCF. It was directed to Dr. Mescon, but he was not an authorized agent of KSCF. According to Jeffery Eisenach, President of WPG, even though the contract was not signed, it memorialized the terms of the relationship between WPG and KSCF. (Ex. 41, Mescon 0651–0652; 7/12/96 Eisenach Tr. 42; 11/14/96 Eisenach Tr. 11).
60 Prior to assuming control of the course PFF was tasked with putting together the book of readings that were to be used for the course. This entailed Mr. Eisenach and Mr. Hanser editing the writings of others. Mr. Hanser was paid $5,000 or $10,000 for this work, but Mr. Eisenach was not separately compensated for his role in this. (7/12/96 Eisenach Tr. 68). Mr. Eisenach was president of PFF, WPG, former Executive Director of GOPAC, and advisor to Mr. Gingrich. Mr. Hanser was a close friend, confidant, and at times a congressional employee of Mr. Gingrich. He was also a board member and consultant to GOPAC and a board member and consultant to the Progress and Freedom Foundation. (6/28/96 Hanser Tr. 6–10, 14). He had a substantial role in developing the course. (6/28/96 Hanser Tr. 19–20).
KSCF and KSC had little or no role in supervising the course or its dissemination. Since the course was a ‘‘Special Topics’’ course, it did not need to go through formal approval by a curriculum committee at KSC—it only required Mr. Mescon’s approval. (6/13/96 Siegel Tr. 15–16, 30, 32, 76–77). While Mr. Mescon was given advance copies of Mr. Gingrich’s lectures, he had little input into their content. (6/28/96 Hanser Tr. 22; 6/13/96 Desmond Tr. 63). Mr. Mescon described his role more in terms of having his own 20 hours to put forth any counterpoint or objection to any of the material in Mr. Gingrich’s lectures. (6/13/96 Mescon Tr. 40–41).61 The controversy pertained to objections voiced by KSC faculty to the course on the grounds that it was essentially political. (Ex. 127, KSC 3550–3551, 3541, 3460, 3462). Because of the Board of Regents’ decision and the controversy, it was decided that the course would be moved to a private college. (7/12/96 Eisenach Tr. 47–50).62
61 The December 8, 1994 letter from Mr. Gingrich to the Committee states that, ‘‘Respected scholars such as James Q. Wilson, Everett Carl Ladd, and Larry Sabato continue to contribute to and review course content.’’ (Ex. 138, p. 3). The same reference to Mr. Wilson’s and Mr. Sabato’s review of the course is contained in a September 3, 1993 memorandum sent out over Jana Rogers’ name to site hosts. (Ex. 125, PFF 22963). However, in a letter from James Q. Wilson to Mr. Eisenach dated September 28, 1993, Mr. Wilson wrote:
Perhaps I don’t understand the purpose of the course, but if it is to be a course rather [than] a series of sermons, this chapter won’t do. It is bland, vague, hortatory, and lacking in substance. (emphasis in original)
* * * * * * *
(Ex. 126, PFF 5994–5995). Also, in a book co-written by Larry Sabato, the following statements are made:
In late 1992 and early 1993, Gingrich began conceiving a new way to advance those political goals—a nationally broadcast college course, ambitiously titled ‘‘Renewing American Civilization,’’ in which he would inculcate students with his Republican values. (p. 94).
* * * * * * *
Nominally an educational enterprise, internal course planning documents revealed the true nature of the course as a partisan organizing tool. (p. 95).
Sabato, L. and Simpson, G., ‘‘Dirty Little Secrets: The Persistence of Corruption in American Politics,’’ Times Books (1996).
62 Near the end of his interview, Mr. Mescon expressed embarrassment in regard to his participation in the course. He became involved in the course in order to raise the profile of the school, but now believes that his efforts have had severe repercussions. (6/13/96 Mescon Tr. 136–137).
H. Reinhardt College’s Role in the Course
Reinhardt College was chosen as the new host for the course in part because of its television production facilities. (6/12/96 Falany Tr. 14). The 1994 and 1995 courses took place at Reinhardt. While there, PFF assumed full responsibility for the course. It no longer received payments to run the course. Rather, it paid Reinhardt to use the college’s video production facilities. All funds for the course were raised by and expended by PFF under its tax-exempt status.
The approximate expenditures for the course were $450,000 in 1994 and $450,000 in 1995. At PFF this represented 63% of its program expenditures for its first fiscal year (which ended March 31, 1994) and 35% of its program expenditures for its second fiscal year (which ended March 31, 1995).63 Reinhardt had a curriculum committee review the content of the course before deciding to have it presented on its campus. (6/12/96 Falany Tr. 15–16). The controversy over the course at KSC, however, affected the level of involvement Reinhardt was willing to assume in regard to the course. (6/12/96 Falany Tr. 44–48, 51–53, 59–66; 6/12/96 Minnix Tr. 26–27). In this regard, Reinhardt’s administration saw a distinction between the ‘‘course’’ and a broader political ‘‘project.’’ As stated in a memorandum from Mr. Falany, Reinhardt’s President, to Mr. Eisenach dated November 11, 1993:
First, there seems to be a ‘‘project’’, which is Renewing American Civilization, of which the ‘‘course’’ is a part. This distinction is blurred at times in the Project Overview. When you refer to the ‘‘project’’ it seems to imply a broader political objective (a non-welfare state). This is not to say that this political objective should be perceived as being negative, but it should, in fact, be seen as broader than and distinct from the simpler objective of the ‘‘course.’’
(Ex. 128, Reinhardt 0225).64 Because of this concern, Reinhardt administrators agreed to be involved only in the actual teaching of the course on its campus and would not participate in any other aspects of the project. (6/12/96 Falany Tr. 51–53, 59–66; 6/12/96 Minnix Tr. 26–27).65 In this regard, Mr. Falany made it clear to the faculty and staff at the college that:
It is important to understand that, for the Winter Quarter 1994, the College will offer the course and teach it—that is the extent of our commitment.
At the present time, the Progress and Freedom Foundation will handle all of the fund raising associated with the course; the distribution of tapes, text and materials; the broadcasting; and the handling of all information including the coordination of off-campus sites.
(Ex. 129, Reinhardt 0265).66
As was the case at KSC, Reinhardt administrators considered the course to be the forty hours of lecture by both Mr. Gingrich and Ms. Minnix. (6/12/96 Falany Tr. 74–76). Again, only Mr. Gingrich’s portion of the course was disseminated outside of Reinhardt. (6/12/96 Falany Tr. 53–54; 6/12/96 Minnix Tr. 48–49). Ms. Minnix had little contact with Mr. Gingrich, and no input into the content of the course in 1994. In 1995 she had only limited input into the content of the course. (6/12/96 Minnix Tr. 20–22). Similarly, Mr. Gingrich and his associates provided no input as to Ms. Minnix’s portion of the course. (6/12/96 Minnix Tr. 31–32).
63 As of November 1996, PFF’s tax return (Form 990) for its third fiscal year (which ended March 31, 1996) had not been filed.
64 Reinhardt saw the ‘‘project’’ as essentially dealing with the dissemination of the course outside of Reinhardt’s campus. (6/12/96 Falany Tr. 48–50, 54–66, 84–85).
65 All of the funds for the course while at Reinhardt were raised by PFF under its tax exempt status.
66 Reinhardt College did rent its television production facilities to PFF for its use in the dissemination of the course, and was paid separately for this in the amount of $40,000. All production beyond that was handled by PFF. (6/12/96 Falany Tr. 27–28).
While Mr. Falany did not know the purpose for disseminating the course, and made no inquiries in that regard (6/12/96 Falany Tr. 48–50; 54–66; 84–85), Ms. Minnix did have some knowledge in this area. Based on her contacts with the people associated with the course, she believed Mr.
Gingrich had a global vision of getting American civilization back ‘‘on track’’ and that he wanted to shape the public perception through the course. (6/12/96 Minnix Tr. 59–60). She felt there was an ‘‘evangelical side’’ to the course, which she described as an effort to have people get involved in politics, run for office, and try to influence legislation. (6/12/96 Minnix Tr. 70–71). Ms. Minnix felt uncomfortable with this ‘‘evangelical side.’’ (6/12/96 Minnix Tr. 70). Furthermore, as reflected in her memorandum of the November 1, 1994 meeting with Mr. Gingrich and others, she was aware that the course was to be used to let people know what Mr. Gingrich’s political agenda would be as Speaker. (6/12/96 Minnix Tr. 53–59; Ex. 92, Reinhardt 0064). As with KSC, one of the reasons Reinhardt administrators wanted to have the course taught on its campus was to raise the profile of the school. (6/12/96 Falany Tr. 112–113).
I. End of Renewing American Civilization Course
Although Mr. Gingrich had intended to teach the course for four years, through the 1996 Winter quarter, he stopped teaching it after the 1995 Winter quarter. According to most of the witnesses interviewed on this subject, the reason for this was that he had run out of time in light of the fact that he had become Speaker. (7/12/96 Eisenach Tr. 280; 6/28/96 Hanser Tr. 52–53). On the other hand, Mr. Gingrich says that he had learned all he could from teaching the course and had nothing new to say on the topics. (7/18/96 Gingrich Tr. 364). Mr. Gingrich refused to support the efforts of PFF in regard to the course at that point, largely because he was disappointed with Mr. Eisenach’s financial management of the course. (7/18/96 Gingrich Tr. 365–366). Mr. Eisenach had indicated to Mr. Gingrich that the course was $250,000 in debt and that PFF had used its own resources to cover this shortfall. (Ex. 130, GDC 11325). Mr.
Gingrich was skeptical of this claim, offered to have the records reviewed, and stated that he would help raise any amount that the review disclosed was needed. According to Mr. Gingrich, this offer was not pursued by Mr. Eisenach. (7/18/96 Gingrich Tr. 367–368).
IV. ETHICS COMMITTEE APPROVAL OF COURSE
On May 12, 1993, Mr. Gingrich wrote the Committee asking for ‘‘guidance on the development of an intellectual approach to new legislation that will be different from our normal activities.’’ (Ex. 131, p. 1). He said that he wanted ‘‘to make sure that [his] activities remain within a framework that meets the legitimate ethics concerns of the House.’’ (Ex. 131, p. 1). He went on to describe a course he was planning to teach in the fall of 1993 at Kennesaw State College. The course would be based on his January 25, 1993 Special Order entitled ‘‘Renewing American Civilization.’’ (Ex. 131, p. 2). It would be ‘‘completely non-partisan’’ and, he hoped, would include ideas from many people, including politicians from both parties and academics. (Ex. 131, p. 2). He stated that he believed the development of ideas in the course was a ‘‘crucial part’’ of his job as a legislator. (Ex. 131, p. 3). He ended his letter with a request to the Committee to meet to discuss the project if the Committee had any concerns. (Ex. 131, p. 3). In June 1993, counsel for the Committee, David McCarthy, met with Mr. Gingrich, two people from his staff (Annette Thompson Meeks and Linda Nave) and Mr. Eisenach to discuss the course. (7/18/96 McCarthy Tr. 7; 7/10/96 Meeks Tr. 13). Mr. McCarthy’s initial concern was whether Mr. Gingrich could qualify for a teaching waiver under the House ethics rules. (7/18/96 McCarthy Tr. 16). When he learned Mr. Gingrich was teaching without compensation, the issue of a teaching waiver became, in his opinion, irrelevant. (7/18/96 McCarthy Tr. 16). Mr.
McCarthy then asked questions regarding whether any official resources would be used to support the course and whether Mr. Gingrich planned to use any unofficial resources to subsidize his official business. Mr. McCarthy did not see any problems pertaining to these issues. Mr. Gingrich indicated that he might repeat the lectures from the course as Special Orders on the floor of the House. Mr. McCarthy suggested that Mr. Gingrich consult with the House Parliamentarian on that subject. (Ex. 132, p. 1). One issue raised with Mr. McCarthy was whether the House Ethics Rules permitted Mr. Gingrich to raise funds for a tax-exempt organization. Mr. McCarthy’s conclusion was that since KSCF was a qualified tax-exempt organization, Mr. Gingrich could raise funds for KSCF as long as he complied with the relevant House rules on the subject. (7/18/96 McCarthy Tr. 17). Mr. Eisenach raised the issue concerning the propriety of his being involved in fundraising for the course in light of the fact that he also worked for GOPAC. According to Mr. McCarthy, his response to the issue was as follows:
[T]o my knowledge of tax law, the issue of whether the contributions in support of the course would keep their tax-deductible status would turn not on who did the fundraising but on how the funds were spent, and that the educational nature of the course spoke for itself. I told him that I was aware of no law or IRS regulation that would prevent Eisenach from raising charitable contributions, even at the same time that he was raising political contributions. In any event, I advised him, I expected the Committee to stick by its advisory opinion in the Ethics Manual and not get into second-guessing the IRS on its determinations of tax-exempt status.
(Ex. 132, p. 2). Mr. McCarthy said in an interview that his statement regarding the Committee’s ‘‘stick[ing]’’ by its advisory opinion pertained only to whether Mr. Gingrich could raise funds for the course. (7/18/96 McCarthy Tr. 19).
The discussion did not relate to any other 501(c)(3) issues. (7/18/96 McCarthy Tr. 19). While Mr. McCarthy was aware that the course lectures would be taped and broadcast (7/18/96 McCarthy Tr. 16), neither Mr. Gingrich nor his staff asked for Mr. McCarthy’s advice regarding what activities in that regard were permissible under 501(c)(3) and Mr. McCarthy did not discuss such issues. (7/18/96 McCarthy Tr. 19; 7/18/96 Gingrich Tr. 375– 376; 7/10/96 Meeks Tr. 15). Mr. McCarthy did not recall any discussion regarding a Renewing American Civilization movement. (7/18/ 96 McCarthy Tr. 16). Mr. McCarthy did not recall any discussion of GOPAC’s use of the Renewing American Civilization message. (7/18/96 McCarthy Tr. 12–13). The discussion pertaining to Mr. Eisenach and GOPAC was brief. (Ex. 132, p. 2). During the meeting with Mr. McCarthy, there were no questions posed about 501(c)(3) or what could be done in regard to the course, aside from the fund-raising issue under 501(c)(3). (7/18/96 Gingrich Tr. 375–376). Mr. Gingrich did not believe that it was necessary to explain to Mr. McCarthy his intended use for the course. Mr. Cole: We are focusing, however, on your intended use of the course. And your intended use of the course here was in a partisan political fashion; is that correct? Mr. Gingrich: My intended use was, but I am not sure I had any obligation to explain that to the [C]ommittee. As long as the course itself was nonpartisan and the course itself was legal and the course itself met both accreditation and tax status, I don’t believe I had an obligation to tell the Ethics Committee what my political strategies were. I think that’s a retrospective comment. And maybe I am wrong. I don’t think—the questions were: Was it legal? Did I use official funds? Had we gotten approval? Was GOPAC’s involvement legitimate and legal? Was it an accredited course? Was I getting paid for it? 
I mean, none of those questions require that I explain a grand strategy, which would have seemed crazy in ’94. If I had wandered around and said to people, hi, we are going to win control, reshape things, end the welfare entitlement, form a grand alliance with Bill Clinton, who is also going to join us in renewing America, how would I have written that? (11/13/96 Gingrich Tr. 89–90).

On July 21, 1993, Mr. Gingrich wrote the Committee to provide additional information about the course he planned to teach at KSC. The letter did not discuss how the course was to be funded or that there was a plan to distribute the course nationally via satellite, videotape, audiotape and cable, or that GOPAC’s main theme was to be ‘‘Renewing American Civilization.’’ The letter also did not discuss GOPAC’s role in the course. (Ex. 133).67

67 The information Mr. Gingrich provided to the Committee was that the Kennesaw State College Foundation, a 501(c)(3) organization affiliated with Kennesaw State College, was providing him with a ‘‘Content Coordinator to coordinate the videotape inserts and other materials that will be used in the presentations.’’ (Ex. 133, pp. 1–2). He also wrote that none of his staff would perform tasks associated with the course and that the course material would not be based on previous work of his staff. (Ex. 133, p. 1). Finally, he wrote that much of the material from the course would be presented in Special Orders, although the presentations would have some differences. (Ex. 133, p. 2).

On August 3, 1993, the Committee, in a letter signed by Mr. McDermott and Mr. Grandy, responded to Mr. Gingrich’s letters of May 12, 1993 and July 21, 1993, regarding his request to teach the course and his request to present the course materials in Special Orders. (Ex. 134, p. 1). The Committee’s letter also notes that Mr. Gingrich had asked if he could help KSC raise funds for the course. The Committee’s guidance was as follows:
1. Since Mr. Gingrich was teaching the course without compensation, he did not need the Committee’s approval to do so;
2. It was within Mr. Gingrich’s ‘‘official prerogative’’ to present the course materials in Special Orders;
3. Mr. Gingrich was permitted to raise funds for the course on behalf of charitable organizations, ‘‘provided that no official resources are used, no official endorsement is implied, and no direct personal benefit results.’’ (Ex. 134, p. 1).
The Committee, however, advised Mr. Gingrich to consult with the FEC regarding whether election laws and regulations might pertain to his fundraising efforts. The Committee’s letter to Mr. Gingrich did not discuss any matters relating to the implications of 501(c)(3) on the teaching or dissemination of the course or GOPAC’s relationship to the course. (Ex. 134, p. 1).

V. MR. GINGRICH’S KNOWLEDGE OF SECTION 501(c)(3)

While the seminal Supreme Court case concerning ‘‘private’’ benefit dates from 1945,68 a more recent, significant case on the subject is the 1989 Tax Court opinion in American Campaign Academy v. Commissioner (‘‘ACA’’ or ‘‘Academy’’). In his interview with Mr. Cole he said, ‘‘I lived through that case. I mean, I was very well aware of what the [American Campaign Academy] did and what the ruling was.’’ (11/13/96 Gingrich Tr. 61).69 Responding to the question of whether he had any involvement with the Academy, Mr. Gingrich said: ‘‘I think I actually taught that [sic], but that’s the only direct involvement I had.’’ (12/9/96 Gingrich Tr. 58). In an undated document on GOPAC stationery entitled ‘‘Offices ‘‘explosive?

68 Better Business Bureau of Washington, D.C. v. United States, 326 U.S. 279 (1945).

69 His adviser, Mr. Gaylord, was a director of the Academy. (12/9/96 Gingrich Tr. 57; American Campaign Academy v. Commissioner, 92 T.C. 1053, 1056 (1989)). As referred to above, Mr. Gaylord was one of the ‘‘five key people’’ Mr.
Gingrich relied on most. (Ex. 3, GDC 11551, GDC 11553).

Mr. Gingrich explained to the Subcommittee in November 1996 that, in his opinion, there were no ‘‘parallels’’70

70 A document dated November 13, 1990, entitled Campaign For A Successful America, was reviewed by the Subcommittee. (Ex. 145, Eisenach 3086–3142). In a section drafted by Gordon Strauss, an attorney in Ohio, for a consulting group called the Eddie Mahe Company, the following is written:

[S]ome educational organizations, tax exempt under Section 501(c)(3) of the Internal Revenue Code, have engaged in activities which affect the outcome of elections, though that is theoretically not supposed to occur. (Ex. 145, Eisenach 3132).

The document also contains the following:

A very controversial program is being undertaken by a (c)(3), indicating that it may have involvement in the electorial process, notwithstanding the express prohibition on it. At this time, a (c)(3) is not recommended because it would have to be truly independent of the (c)(4) and its PAC. (Ex. 145, Eisenach 3134).

There was substantial inquiry about this document during the Preliminary Inquiry. No evidence was uncovered to indicate that Mr. Gingrich had any exposure to this document. (12/5/96 Mahe Tr. 34–35; 12/9/96 Gingrich Tr. 52–54; 12/5/96 Eisenach Tr. 59–61). Mr. Strauss was interviewed and stated that the document had nothing to do with AOW/ACTV, the 501(c)(3) organization referred to in the document was merely one he had heard of in an IRS Revenue Ruling, and that he never gave Mr. Gingrich any advice on the law pertaining to section 501(c)(3) in regard to AOW/ACTV, the Renewing American Civilization course, or any other projects. The only legal advice he gave Mr. Gingrich pertained to the need for care in the use of official resources for travel expenses.

now, do you think I’ve got a problem? And I don’t think you did that. If you did, tell me you did. * * * (12/10/96 Gingrich Tr. 36–37). Mr.
Gingrich’s response was threefold: attorney where the entire relationship between the course, the movement, and political goals were fully set forth and found to be within the bounds of 501(c)(3). (11/14/96 Eisenach Tr. 88–91).

VI. SUMMARY OF THE CONCLUSIONS OF THE SUBCOMMITTEE’S EXPERT

A. Introduction

Because of differences of opinion among the Members of the Subcommittee regarding the tax issues raised in the Preliminary Inquiry, the Subcommittee determined that it would be helpful to obtain the views of a recognized expert in tax-exempt organizations law, particularly with respect to the ‘‘private benefit’’ prohibition. The expert, Celia Roady, reviewed Mr. Gingrich’s activities on behalf of ALOF and the activities of others on behalf of ALOF with Mr. Gingrich’s knowledge and approval. She also reviewed Mr. Gingrich’s activities on behalf of KSCF, PFF, and Reinhardt College in regard to the Renewing American Civilization course and the activities of others on behalf of those organizations with Mr. Gingrich’s knowledge and approval. The purpose of this review was to determine whether those activities violated the status of any of these organizations under section 501(c)(3) of the Internal Revenue Code.

B. Qualifications of the Subcommittee’s Expert

Ms. Roady is a partner in the Washington, D.C. office of the law firm Morgan, Lewis & Bockius LLP where she specializes full-time in the representation of tax-exempt organizations. Her practice involves the provision of advice on all aspects of section 501(c)(3). Ms.
Roady has written many articles on tax-exempt organization issues for publication in legal periodicals such as the ‘‘Journal of Taxation of Exempt Organizations’’ and the ‘‘Exempt Organization Tax Review.’’ She is a frequent speaker on exempt organizations topics, regularly lecturing at national tax conferences such as the ALI/ABA conference on charitable organizations and the Georgetown University Law Center conference on tax-exempt organizations, as well as at local tax conferences and seminars on tax-exempt organization issues. In 1996, she was named the Program Chair of the Georgetown University Law Center’s annual conference on tax-exempt organizations. (11/15/96 Roady Tr. 2–7). Ms. Roady is the immediate past Chair of the Exempt Organizations Committee of the Section of Taxation of the American Bar Association, having served as Chair from 1993 to 1995. She is currently serving a three-year term as a member of the Council of the ABA Section of Taxation, and is the Council Director for the Section’s Exempt Organizations Committee. She also serves on the Legal Section Council of the American Society of Association Executives, and is a Fellow of the American College of Tax Counsel. (11/15/96 Roady Tr. 2–7). Ms. Roady served a three-year term as the Co-Chair of the Exempt Organizations Committee of the District of Columbia Bar’s Tax Section from 1989 to 1991. She also served on the Steering Committee of the D.C. Bar’s Tax Section from 1989 to 1995, and as Co-Chair of the Steering Committee from 1991 to 1993. (11/15/96 Roady Tr. 2–7). Each of the attorneys interviewed for the position of expert for the Subcommittee highly recommended Ms. Roady. She was described as being impartial and one of the leading people in the field of exempt organizations law. (11/15/96 Roady Tr. 2).71 Ms. Roady is a 1973 magna cum laude graduate of Duke University. She received her law degree from Duke Law School, with distinction, in 1976.
She received a master’s degree in taxation from the Georgetown University Law Center in 1979.

71 The one known public comment on the matter by Ms. Roady is found in the following paragraph from a New York Times article: ‘‘Clearly, it’s an aggressive position,’’ said Celia Roady, a Washington lawyer and chairwoman of the American Bar Association’s committee on tax-exempt organizations, who stressed that she was not talking for the association. ‘‘Whether it’s too aggressive and crosses the line, I don’t know. Clearly, it’s more aggressive than many exempt organizations would go forward with.’’ New York Times, section A, page 12 (Feb. 20, 1995). (Ex. 144). In the same article, Mr. Gingrich is quoted as saying that he acted.

C. Summary of the Expert’s Conclusions

Ms. Roady considered the following issues in her review:
1. whether the content of the television programs broadcast by ALOF or the Renewing American Civilization course were ‘‘educational’’ within the meaning of section 501(c)(3);
2. whether one of the purposes of the activities with respect to the television programs or the course was to provide more than an incidental benefit to GOPAC, Mr. Gingrich, or other Republican entities and candidates in violation of the private benefit prohibition in section 501(c)(3);
3. whether the activities with respect to the television programs or the course provided support to GOPAC or a candidate for public office in violation of the campaign intervention prohibition in section 501(c)(3);
4. whether the activities with respect to the television programs or the course violated the private inurement prohibition in section 501(c)(3); and
5. whether the activities with respect to the television programs or the course violated the lobbying limitations applicable to section 501(c)(3) organizations. (11/15/96 Roady Tr. 7).72

72 A detailed discussion of the law pertaining to organizations exempt from federal income tax under section 501(c)(3) of the Internal Revenue Code is attached as an Appendix to this Report.

With respect to the last two issues, Ms. Roady did not conclude that the activities with respect to the television programs or the course resulted in impermissible private inurement or violated the lobbying limitations applicable to section 501(c)(3) organizations. Similarly, with respect to the first issue, Ms. Roady concluded that the television programs and the course met the requirements of the methodology test described in Rev. Proc. 86–43 and were ‘‘educational’’ within the meaning of section 501(c)(3) even though they advocated particular viewpoints and positions. Accordingly, Ms. Roady concluded that the activities with respect to the television programs and the course served an educational purpose and would be appropriate activities for section 501(c)(3) organizations, as long as there was no violation of the private benefit prohibition or the campaign intervention prohibition. She found substantial evidence, however, of violations of both such prohibitions and therefore concluded that Mr. Gingrich’s activities on behalf of the organizations and the activities of others on behalf of the organizations with Mr. Gingrich’s knowledge and approval violated the organizations’ status under section 501(c)(3). (11/15/96 Roady Tr. 7). The basis for her conclusions may be summarized briefly as follows:

1. THE AMERICAN CITIZENS TELEVISION PROGRAM OF ALOF 73

a. Private benefit prohibition

Under section 501(c)(3) and the other legal authorities discussed above, the analysis of whether there is a violation of the private benefit prohibition does not depend on whether the activities at issue—the television programs—served an exempt purpose.
Even though the television programs met the definition of ‘‘educational,’’ there is a violation of section 501(c)(3) if another purpose of the activities was to provide more than an insubstantial or incidental benefit to GOPAC or any other private party. As the Supreme Court stated in Better Business Bureau v. United States, 326 U.S. 279, 283 (1945), ‘‘the presence of a single noneducational purpose, if substantial in nature, will destroy the exemption regardless of the number or importance of truly educational purposes.’’ In making such a determination, the Tax Court has held that the proper focus is ‘‘the purpose towards which an organization’s activities are directed and not the nature of the activities themselves.’’ American Campaign Academy, 92 T.C. at 1078–79. The determination as to whether there is a violation of the private benefit prohibition cannot, therefore, be made solely by reference to the content of the television programs or whether the activities in relation to the programs served an educational purpose. Rather, the determination requires a factual analysis to determine whether the organization’s activities also had another, nonexempt purpose to provide more than an incidental benefit to a private party such as GOPAC or Republican entities and candidates. In this case, there is substantial evidence that these parties were intended to and did receive more than an incidental benefit from the activities conducted by ALOF. In summary, according to Ms. Roady, the evidence shows that the ACTV project was a continuation of GOPAC’s AOW project, and had the same partisan, political goals as AOW.
These goals included, among other things, reaching ‘‘new groups of voters not traditionally associated with [the Republican] party;’’ ‘‘mobiliz[ing] thousands of people across the nation at the grass roots level [to become] dedicated GOPAC activists;’’ and ‘‘making great strides in continuing to recruit activists all across America to become involved with the Republican party.’’ The persons who conducted the ACTV project on behalf of ALOF were GOPAC officers, employees, or consultants. In essence, the transfer of the AOW project from GOPAC to ALOF was more in name than substance, since the same activities were conducted by the same persons in the same manner with the same goals. Through the use of ALOF, however, these persons were able to raise tax-deductible charitable contributions to support the ACTV project, funding that would not have been available to GOPAC on a tax-deductible basis.

73 After Ms. Roady met with the Subcommittee to discuss the tax-exempt organizations law and her conclusions regarding Renewing American Civilization, she met with the Special Counsel to discuss the ACTV project. Although she did not formally present her conclusions to the Subcommittee, the legal principles she explained during her meetings with the Subcommittee with respect to Renewing American Civilization were equally applicable to the facts surrounding the ACTV project and support her conclusions set forth in this section of the Report.

Taken together, according to Ms. Roady, the facts as described above show that in addition to its educational purpose, another purpose of the ACTV project was to benefit GOPAC and, through it, Republican entities and candidates, by continuing to conduct the AOW project under a new name and through a section 501(c)(3) organization that could raise funding for the project through tax-deductible charitable contributions. This benefit was not merely incidental.
To the contrary, the evidence supports a finding that one of the main purposes for transferring the project to ALOF was to make possible the continuation of activities that substantially benefited GOPAC and Republican entities and candidates. For these reasons, Ms. Roady concluded that one of the purposes of Mr. Gingrich’s activities on behalf of ALOF and the activities of others on behalf of ALOF with Mr. Gingrich’s knowledge and approval was to provide more than an incidental benefit to GOPAC and Republican entities and candidates in violation of the private benefit prohibition. b. Campaign intervention prohibition As with respect to the private benefit prohibition, the legal authorities discussed above make it clear, according to Ms. Roady, that the analysis of whether there is a violation of the campaign intervention prohibition does not turn on whether the television programs had a legitimate educational purpose. In the IRS CPE Manual, the IRS explained that ‘‘activities that meet the [educational] methodology test * * * may nevertheless constitute participation or intervention in a political campaign.’’ IRS CPE Manual at 415. See also New York Bar, 858 F.2d 876 (2d Cir. 1988); Rev. Proc. 86–43. Nor does the analysis turn on the fact that the television programs did not expressly urge viewers to ‘‘support GOPAC,’’ ‘‘vote Republican,’’ or ‘‘vote for Mr. Gingrich.’’ The IRS does not follow the express advocacy standard applied by the FEC, and it is not necessary to advocate the election or defeat of a clearly identified candidate to violate the campaign intervention prohibition. IRS CPE Manual at 413. The determination as to whether there is a violation of the campaign intervention prohibition requires an overall ‘‘facts and circumstances’’ analysis that cannot be made solely by reference to the content of the television programs. The central issue is whether the television programs provided support to GOPAC. 
When Congress enacted section 527 in 1974, the legislative history explained that the provision was not intended to affect the prohibition against electioneering activity contained in section 501(c)(3). The IRS regulations under section 527 provide that section 501(c)(3) organizations are not permitted to establish or support a PAC. Treas. Reg. § 1.527–6(g). Under the applicable legal standards, there is a violation of the campaign intervention prohibition with respect to ALOF if the evidence shows that the ACTV project provided support to GOPAC, even though the television programs were educational and were not used as a means to expressly advocate the election or defeat of a particular candidate. According to Ms. Roady, there is substantial evidence of such support in this case. As discussed above, the evidence shows that the ACTV project conducted by ALOF was a continuation of AOW, a partisan, political project undertaken by GOPAC. Mr. Gingrich himself described ACTV as a continuation of the AOW project. The activities conducted by ALOF with respect to the ACTV project were the same as the activities that had been conducted by GOPAC with respect to the AOW project. The persons who conducted the ACTV project on behalf of ALOF were GOPAC officers, employees, or consultants. Shifting the project to ALOF allowed the parties to raise some tax-deductible charitable contributions to conduct what amounted to the continuation of a GOPAC project for partisan, political purposes. For these reasons, Ms. Roady concluded that Mr. Gingrich’s activities on behalf of ALOF and the activities of others on behalf of ALOF with Mr. Gingrich’s knowledge and approval provided support to GOPAC in violation of the campaign intervention prohibition.

2. THE RENEWING AMERICAN CIVILIZATION COURSE

a.
Private benefit prohibition

The determination of whether there is a violation of the private benefit prohibition does not depend on whether the teaching and dissemination of the course served an educational purpose, and cannot be made simply by analyzing the content of Mr. Gingrich’s lectures. The course met the definition of ‘‘educational’’ under section 501(c)(3) and served an educational purpose. (11/15/96 Roady Tr. 7). Nevertheless, there is a violation of section 501(c)(3) if another purpose of the course was to provide more than an incidental private benefit. (11/15/96 Roady Tr. 17). Making this determination requires an analysis of the facts to find out whether Mr. Gingrich’s activities on behalf of KSCF, PFF, and Reinhardt and the activities of others with his knowledge and approval had another nonexempt purpose to provide more than an incidental benefit to private parties such as Mr. Gingrich, GOPAC, and other Republican entities and candidates. In this case, there is substantial evidence that these parties were intended to and did receive more than an incidental benefit from the activities conducted with respect to the course. (11/15/96 Roady Tr. 78, 123, 124, 130, 131, 142–145, 173, 195). In summary, according to Ms. Roady, the evidence shows that the course was developed by Mr. Gingrich in the context of a broader movement. (11/15/96 Roady Tr. 127–130, 134–135, 196). This movement was intended to have political consequences that would benefit Mr. Gingrich in his re-election efforts, GOPAC in its national political efforts, and Republican party entities and candidates in seeking to attain a Republican majority. The goals of the movement were expressed in various ways, and included arousing 200,000 activists interested in renewing American civilization by replacing the welfare state with an opportunity society and having
It was intended that a Republican majority would be part of the movement, and that the Republican party would be identified with the ‘‘opportunity society’’ and the Democratic party with the ‘‘welfare state.’’ (11/15/96 Roady Tr. 128, 130, 142, 145–148, 217–218; 11/19/96 Roady Tr. 35, 41). The movement, the message of the movement, and the course were all called ‘‘Renewing American Civilization.’’ Mr. Gingrich’s lectures in the course were based on the same principles as the message of the movement, and the course was an important vehicle for disseminating the message of the movement. Mr. Gingrich stated that the course was ‘‘clearly the primary and dominant method [of disseminating the message of the movement.]’’ Mr. Gingrich used the Renewing American Civilization message in almost every political and campaign speech he made in 1993 and 1994. He was instrumental in determining that virtually the entire political program for GOPAC for 1993 and 1994 would be centered on developing, disseminating, and using the message of Renewing American Civilization. (11/15/96 Roady Tr. 125–127, 144–145, 148–149, 153, 177, 218). Although GOPAC’s financial resources were not sufficient to enable it to carry out all of the political programs at its usual level during this period, it had many roles in regard to the course. These roles included development of the course content which was coordinated in advance with GOPAC charter members, fundraising for the course on behalf of the section 501(c)(3) organizations, and promotion of the course. GOPAC envisioned a partisan, political role for the course. (11/15/96 Roady Tr. 197–202, 208–209). From 1993 to 1995, KSCF and PFF spent most of the money they had raised for the course on the dissemination of the 20 hours taught by Mr. Gingrich. 
These funds were raised primarily through tax-deductible charitable contributions to KSCF and to PFF,74 funding that would not have been available had the project been conducted by GOPAC or another political or noncharitable organization.

74 Some funding came from the sale of videotapes and audiotapes of the course. (7/12/96 Eisenach Tr. 283).

According to Ms. Roady, the facts as set forth above show that, although the Renewing American Civilization course served an educational purpose, it had another purpose as well. (11/19/96 Roady Tr. 37, 40). The other purpose was to provide a means for developing and disseminating the message of Renewing American Civilization by replacing the welfare state with an opportunity society. That was the main message of GOPAC and the main message of virtually every political and campaign speech made by Mr. Gingrich in 1993 and 1994. Through the efforts of Mr. Gingrich and others acting with his knowledge and approval, tax-deductible charitable contributions were raised to support the dissemination of a course in furtherance of Mr. Gingrich’s political strategies. (11/19/96 Roady Tr. 37, 38). Mr. Gingrich encouraged GOPAC, House Republicans and other Republican entities and candidates to use the course in their political strategies as well. (11/15/96 Roady Tr. 145, 152, 173). The partisan, political benefit to these parties was intended from the outset, and this benefit cannot be considered merely incidental. To the contrary, the evidence supports a finding that one of Mr. Gingrich’s main purposes for teaching the course was to develop and disseminate the ideas, language, and concepts of Renewing American Civilization as an integral part of a broad movement intended to have political consequences that would benefit him in his re-election efforts, GOPAC in its political efforts, and other Republican entities and candidates in seeking to attain a Republican majority. For these reasons, Ms.
Roady concluded that one of the purposes of Mr. Gingrich’s activities on behalf of KSCF, PFF and Reinhardt in regard to the course entitled ‘‘Renewing American Civilization’’ and the activities of others on behalf of those organizations with Mr. Gingrich’s knowledge and approval was to provide more than an incidental benefit to Mr. Gingrich, GOPAC, and other Republican entities and candidates in violation of the private benefit prohibition. (11/15/96 Roady Tr. 122, 125, 127, 143–145, 148, 152, 153, 187–189, 213–217). b. Campaign intervention prohibition As discussed above, neither the fact that the content of the Renewing American Civilization course is educational within the meaning of section 501(c)(3) nor the fact that the course lectures do not contain expressions of support or opposition for a particular candidate precludes a finding that there is a violation of the campaign intervention prohibition. Section 501(c)(3) organizations are prohibited from establishing or supporting PACs, and from providing support to candidates in their campaign activities. The relevant issue is whether the course provided support to GOPAC or to Mr. Gingrich in his capacity as a candidate. According to Ms. Roady, there is substantial evidence of such support in this case. As discussed above, the evidence shows that the course was developed by Mr. Gingrich as a part of a broader political movement to renew American civilization by replacing the welfare state with an opportunity society. The course was an important vehicle for disseminating the message of that movement. The message of replacing the welfare state with the opportunity society was also used in a partisan, political fashion. The ‘‘welfare state’’ was associated with Democrats and the ‘‘opportunity society’’ was associated with Republicans. The message of the course was also the main message of GOPAC during 1993 and 1994 and the main message of virtually every political and campaign speech made by Mr. 
Gingrich in 1993 and 1994. Through the use of section 501(c)(3) organizations, Mr. Gingrich and others acting with his knowledge and approval raised tax-deductible charitable contributions which were used to support a course designed, developed and disseminated in a manner that provided support to GOPAC in its political programs and to Mr. Gingrich in his re-election campaign. For these reasons, Ms. Roady concluded that Mr. Gingrich’s activities on behalf of KSCF, PFF and Reinhardt and the activities of others on behalf of those organizations with Mr. Gingrich’s knowledge and approval provided support to GOPAC and to Mr. Gingrich in violation of the campaign intervention prohibition. (11/15/96 Roady Tr. 171–175, 194).

D. Advice Ms. Roady Would Have Given

Had Mr. Gingrich or others associated with ACTV or Renewing American Civilization consulted with Ms. Roady prior to conducting these activities under the sponsorship of 501(c)(3) organizations, she would have advised that they not do so for the reasons set forth above. During her testimony before the Subcommittee, she was asked what her advice would have been to Mr. Gingrich and others associated with ACTV and Renewing American Civilization. She said that she would have recommended the use of a 501(c)(4) organization to pay for the dissemination of the course, as long as the dissemination was not the primary activity of the 501(c)(4) organization. If this had been done, contributions for ACTV and the course would not have been tax-deductible. (11/15/96 Roady Tr. 207–208).

VII. SUMMARY OF CONCLUSIONS OF MR. GINGRICH’S TAX COUNSEL

A. Introduction

During the Preliminary Inquiry, Mr. Gingrich’s lawyer forwarded to the Subcommittee a legal opinion letter and follow-on letter regarding the tax questions at issue. The letters were prepared by attorney James P. Holden. At Mr. Gingrich’s request, Mr.
Holden and his partner who helped him prepare the letters, Susan Serling, met with the Subcommittee on December 12, 1996, to discuss his conclusions. The purpose of the letters was to express Mr. Holden’s conclusions regarding whether any violation of section 501(c)(3) occurred with respect to the Renewing American Civilization course. His understanding of the facts of the matter was based on a review of the course book prepared for the course, videotapes of the course, documents produced by KSC pursuant to the Georgia Open Records Act, PFF’s application to the IRS for exemption, newspaper articles, and discussions with Mr. Baran, Mr. Eisenach, and counsel to PFF and KSCF.75

75 Mr. Holden and his partner conferred with Mr. Eisenach for about three hours. (12/12/96 Holden Tr. 38). The conversation with KSCF counsel, via telephone, lasted about 30 minutes. (12/12/96 Holden Tr. 39). The conversation with PFF’s counsel lasted about two hours. (12/12/96 Holden Tr. 38–39). Mr. Holden did not talk to Mr. Gingrich prior to writing the opinion. (12/12/96 Holden Tr. 43). He also did not talk to anyone else involved in the course, such as Mr. Hanser, Ms. Rogers, Ms. Nelson, Mr. Mescon, or Ms. Minnix. (12/12/96 Holden Tr. 43–44).

B. Qualifications of Mr. Gingrich’s Tax Counsel

Mr. Holden is a partner at the Washington, D.C. law firm of Steptoe and Johnson. He was an adjunct professor at Georgetown University Law Center from 1970 to 1983. He is co-author of ‘‘Ethical Problems in Federal Tax Practice’’ and ‘‘Standards of Tax Practice.’’ He is the author of numerous tax publications and a speaker at numerous tax institutes. He was chair of the American Bar Association Section of Taxation from 1989 to 1990; chair of the Advisory Group to the Commissioner of Internal Revenue from 1992 to 1993; and chair of the IRS Commissioner’s Review Panel on Integrity Controls from 1989 to 1990.
He was a trustee and president of the American Tax Policy Institute from 1993 to 1995 and a regent of the American College of Tax Counsel. He is or was a member of the following organizations: American Law Institute (consultant, Federal Income Tax Project); Advisory Group to Senate Finance Committee Staff regarding Subchapter C revisions (1984–1985); Board of Advisors, New York University/Internal Revenue Service Continuing Professional Education Program (1987–1990); and BNA Tax Management Advisory Board. He received a J.D. degree from Georgetown University Law Center in 1960 and a B.S. degree from the University of Colorado in 1953. His experience in 501(c)(3) law stems principally from one client and one case that has been before the IRS for the past six years. (12/12/96 Holden Tr. 21).76 He said during his testimony, ‘‘I don’t pretend today to be a specialist in exempt organizations. * * * I pretend to be an expert in the political aspects of such organizations.’’ (12/12/96 Holden Tr. 21). The one case Mr. Holden worked on has not been resolved and he has spent, on average, about 30 percent of his time for the last six years on this case. (12/12/96 Holden Tr. 24). He has never been a member of any organization or committee concerned principally with tax-exempt organizations law. (12/12/96 Holden Tr. 25). He does not have any publications in the exempt organizations field. (12/12/96 Holden Tr. 25). He has never given any speeches on exempt organizations law nor has he been an expert witness with respect to exempt organizations law. (12/12/96 Holden Tr. 26). When Mr. Baran asked Mr. Holden to prepare his opinion letter, Mr. Baran did not ask what qualifications Mr. Holden had in the exempt organizations area. (12/12/96 Holden Tr. 32). Mr. Holden did not give Mr. Baran any information regarding his background in exempt organizations law other than the names of two references. (12/12/96 Holden Tr. 33). Mr.
Holden’s partner who helped prepare the opinion, Susan Serling, does not have experience in the exempt organizations field other than with respect to the one case referred to above that is still before the IRS. (12/12/96 Holden Tr. 27). She is not a member of the ABA Exempt Organizations Committee and does not have any publications in the exempt organizations field. She has never given any speeches pertaining to exempt organizations law and has never testified as an expert witness with respect to exempt organizations law. (12/12/96 Holden Tr. 27).

C. Summary of Conclusions of Mr. Gingrich’s Tax Counsel

As set forth in Mr. Holden’s opinion letter, his follow-on letter, and in his testimony, it was Mr. Holden’s opinion, based on his review of the facts available to him, that ‘‘there would be no violation of section 501(c)(3) if an organization described in that section were to conduct ‘Renewing American Civilization’ as its primary activity.’’ (9/6/96 Holden Ltr. 4). In arriving at this opinion, Mr. Holden evaluated the facts in light of the requirements:

1. that a section 501(c)(3) organization be operated exclusively for an exempt purpose;
2. that the organization serve a public rather than a private interest;
3. that the earnings of an organization not inure to the benefit of any person;
4. that no substantial part of the activities of the organization consist of attempting to influence legislation; and
5. that the organization not participate or intervene in any political campaign in support of or in opposition to any candidate for public office. (9/6/96 Holden Ltr. 4).

76 Although Mr. Holden declined to identify the client in this case, he said that the case ‘‘is perhaps the largest case the Internal Revenue Service has before it on this whole issue.’’ (12/12/96 Holden Tr. 20–21).

A discussion of Mr.
Holden’s views on the two principal tax questions at issue before the Subcommittee—the private benefit prohibition and campaign intervention prohibition—is set forth below.

1. PRIVATE BENEFIT PROHIBITION

With respect to whether Renewing American Civilization violated the private benefit prohibition described above, Mr. Holden’s opinion and follow-on letter focused exclusively on the American Campaign Academy case. His letters did not refer to other precedent or IRS statements pertaining to the private benefit prohibition. In evaluating whether Renewing American Civilization created any discernible secondary benefit, in the terms used by the Court in American Campaign Academy, Mr. Holden considered whether the course provided an ‘‘identifiable benefit’’ to GOPAC or the Republican party. He concluded that it did not.

Following our review of the course materials, the course syllabi, and video tapes of the course lectures, we have not been able to identify any situation in which students of the course were advised to vote Republican, join the Republican party, join GOPAC, or support Republicans in general. Rather, the course explored broad aspects of American civilization through Mr. Gingrich’s admittedly partisan viewpoint. (9/17/96 Holden Ltr. 5).

Mr. Holden also wrote:

From our review of the course materials * * * and their presentation, it appears to us that the educational message was not narrowly targeted to benefit particular organizations or persons beyond the students themselves. (9/6/96 Holden Ltr. 58).

During his testimony before the Subcommittee, Mr. Holden said that because the course was educational within the meaning of the ‘‘methodology test’’ referred to above, he could not ‘‘conceive’’ of how the broad dissemination of its message could violate 501(c)(3). (12/12/96 Holden Tr. 71).
Now, when we get into the course—and I am saying I am going to look at the activities, and if I have a clean educational message, then my organization is entitled to disseminate that message as broadly as we have the resources to do [for any purpose as long as it is] serving the public with that in the sense that this message has utility to the public. (12/12/96 Holden Tr. 113–114).

In coming to his conclusion that the course did not violate the private benefit prohibition, Mr. Holden made several findings of fact and several assumptions. For example, he wrote that he considered the facts that established a close connection between individuals who were active in GOPAC and the development and promotion of the course. As he characterized it, GOPAC’s former Executive Director and GOPAC employees became employees or contractors to the organizations that conducted the course. Individuals, foundations, and corporations that provided financial support for the course were also contributors to GOPAC or Mr. Gingrich’s political campaigns. GOPAC employees solicited contributions for the course. (9/6/96 Holden Ltr. 4). Furthermore, documents he reviewed:

provide[d] evidence that the course was developed in a political atmosphere and as part of a larger political strategy. The documents indicate that Mr. Gingrich and GOPAC evolved a political theme that they denominated ‘‘Renewing American Civilization’’ and that, in their political campaign capacities, they intended to press this theme to the advantage of Republican candidates. (9/17/96 Holden Ltr. 2).

Mr. Holden assumed a political motivation behind the development of the course. As described in his opinion letter:

[T]he individuals who controlled GOPAC and who participated in promoting the course viewed the course as desirable in a political context, and many of their expressions and comments evidence a political motive and interest. * * * Mr.
Gingrich is a skilled politician whose ideology finds expression in a political message, and he is interested in maximum exposure of that message and in generating interest in those who might be expected to become advocates of the message. In sum, we have not assumed that the development and promotion of the course were free from political motivation. (9/6/96 Holden Ltr. 4–5).

Furthermore, Mr. Holden said that when preparing his opinion, he made the ‘‘critical assumption that the interests of the political persona surrounding GOPAC were advanced by creating this course.’’ (12/12/96 Holden Tr. 72). In this regard, Mr. Holden also said during his testimony:

We have taken as an assumption that the intent [of the course] was to benefit the political message. If someone told me that teaching the course actually resulted in the benefit, I guess I wouldn’t be surprised because that was our understanding of the objective. * * * I accept[ed] for purposes of our opinion that there was an intent to advance the political message by utilizing a (c)(3). (12/12/96 Holden Tr. 83).

In Mr. Holden’s opinion, however, the political motivation or strategy behind the creation of the course is irrelevant when determining whether a violation of the private benefit prohibition occurred.

It is not the presence of politicians or political ideas that controls. The pertinent law does not turn on the political affiliations or political motivations of the principal participants. (9/6/96 Holden Ltr. 6).

According to Mr. Holden, the issue of whether a violation of 501(c)(3) occurred ‘‘may not be resolved by a determination that the individuals who designed and promoted the course acted with political motivation.’’ (9/17/96 Holden Ltr. 4). In his opinion, when determining whether an organization violated the private benefit prohibition, it is necessary to determine whether an organization’s activities in fact served a private interest. (12/12/96 Holden Tr. 80).
What motivates the activities is irrelevant. I’m saying it’s irrelevant to look to what caused an individual or group of individuals to form a (c)(3) or to utilize a 501(c)(3) organization. The question instead is on the activities—the focus instead is on the activities of the organization and whether they violated the operational test. I think that’s a critical distinction. (12/12/96 Holden Tr. 61). He said that he was ‘‘aware of no authority that would hold that because one is motivated to establish a 501(c)(3) organization by business, political, or other motivation, that means that the organization cannot operate in a manner that satisfies 501(c)(3), because we are talking about an operational test.’’ (12/12/96 Holden Tr. 17–18). Mr. Holden cited American Campaign Academy as an authority for his conclusion that an organization’s activity must itself benefit a targeted group and that motivation of an organization’s agents in conducting that activity is irrelevant. Mr. Holden said: [In American Campaign Academy] [t]he focus was, instead, on the operational test and whether the activities of the organization evidenced a purpose to serve a private interest. But you have to find that in the activities of the organization and not in some general notion of motivation or background purpose. (12/12/96 Holden Tr. 61). In light of these and similar comments made by Mr. Holden, the Special Counsel asked Mr. Holden to comment on statements found in the American Campaign Academy case at page 1064. The statements are in a section of the case under the heading ‘‘Operational Test’’ and are as follows: The operational test examines the actual purpose for the organization’s activities and not the nature of the activities or the organization’s statement of purpose. (citations omitted). (emphasis supplied). 
In testing compliance with the operational test, we look beyond the four corners of the organization’s charter to discover ‘‘the actual objects motivating the organization 75 and the subsequent conduct of the organization.’’ (citations omitted). (emphasis supplied). What an organization’s purposes are and what purposes its activities support are questions of fact. (citations omitted). (12/12/96 Holden Tr. 75–76). After the Special Counsel brought these sections of the case to Mr. Holden’s attention, the following exchange occurred: Mr. Holden: May I refer you to the last sentence before the next heading, ‘‘Operating Primarily for Exempt Purposes.’’ The last sentence before that says: ‘‘The sole issue for declaration [sic] is whether respondent properly determined that petitioner failed to satisfy the first condition of the operational test by not primarily engaging in activities, which is not for exempt purposes.’’ It’s an activities test. And this is where the courts say this is the sole issue. The stuff before, they’re just kind of reciting the law. When he gets to this, he said this is what we have to determine. Mr. Cole: But in reciting the law, don’t they say, in testing compliance with the operational test, we look beyond the four corners of the organization’s charter to discover the actual objects motivating the organization? Prior to that, they say the operational test examines the actual purpose for the organization’s activities, not the nature of the activities or the organization’s statement of purpose. I grant you that is the statement of the law, but you are saying that has no significance? Mr. Holden: That’s not the case Judge Nims decided. * * * (12/12/96 Holden Tr. 77). 2. CAMPAIGN INTERVENTION PROHIBITION In his opinion letter, Mr. Holden wrote that it was ‘‘important to note that section 501(c)(3) does not, as is often suggested, bar ‘political activity’ [by 501(c)(3) organization].’’ (9/6/96 Holden Ltr. 68). 
The prohibition is more limited and prohibits an organization from participating in or intervening in any political campaign on behalf of or in opposition to any candidate for public office. In order for an organization to violate this prohibition, there must exist a campaign, a candidate, a candidate seeking public office, and an organization that participates or intervenes on behalf of or in opposition to that candidate. (9/6/96 Holden Ltr. 68–69).

Mr. Holden concluded that the course did not violate this prohibition.

The [course] materials contain no endorsement of or opposition to the candidacy of any person, whether expressed by name or through the use of a label that might be taken as a stand-in for a candidate. While the materials are critical of what is referred to as the ‘‘welfare state’’ and laudatory of what is described as an ‘‘opportunity society,’’ none of this is properly characterized as personalized to candidates, directly or indirectly. (9/6/96 Holden Ltr. 72).

During his testimony before the Subcommittee, Mr. Holden said that the course contained issue advocacy in the sense that it called for the replacement of the welfare state with the opportunity society. (12/12/96 Holden Tr. 103–104). He also said that this issue—the replacement of the welfare state with an opportunity society—was closely identified with Mr. Gingrich and his political campaigns. (12/12/96 Holden Tr. 104). He, however, did not see this as a basis for concluding that the course violated the prohibition on intervention in a political campaign because ‘‘Mr. Gingrich [had not] captured [this issue] to the point where it is not a legitimate public interest issue for discussion in a purely educational setting, even where he is the instructor.’’ (12/12/96 Holden Tr. 104).77

D. Advice Mr. Holden Would Have Given

During his appearance before the Subcommittee, Mr. Holden was asked about what type of organization he would have advised Mr.
Gingrich and others to use in order to conduct and disseminate Renewing American Civilization had he been asked in advance. He said that he would not have advised the use of a 501(c)(3) organization because the mix of politics and tax-deductible funds is too ‘‘explosive.’’

I would have advised them not to do the activity through a (c)(3). I have already expressed that view to the Speaker. He didn’t consult me in advance, but I said, if I had been advising you in advance. He said, why not. I said, because the intersection of political activity and 501(c)(3) is such an explosive mix in terms of the IRS view of things that I would not advise you to move that close to the issue. You should find a way of financing the course that doesn’t involve the use of 501(c)(3) funds. That would have been my advice to him. I said, that doesn’t mean I conclude that what you did is a violation. In fact, I think we are kind of fairly far out beyond the frontiers of what has been decided in the past in this area. We are looking at the kind of case that I do not think has ever been presented. I do not see how anyone can conclude that this is an open and shut case. It just is not of that character. (12/12/96 Holden Tr. 132–134).

Mr. Holden said that an appropriate vehicle for the course might have been a 501(c)(4) organization because such an organization can engage in some political activity and the activity would not have used tax-deductible funds.

77 See also 12/12/96 Holden Tr. 103:

Mr. Schiff: But if you are providing 501(c)(3) raised money to pay for that candidate to give the same message, which is his political message, I think, for all substantial purposes, aren’t you then, in effect, intervening or even endorsing the candidate by using that type of money to allow him to get his message further than it would get in the absence of that money?

Mr.
Holden: I go back to the fact that we have a clean curriculum that we were talking about in a hypothetical and in the judgment that we reached about this case, and I don’t believe that merely because a political figure takes a particular set of values and articulates them as a political theme, that that so captures that set of values that a 501(c)(3) organization cannot legitimately educate people about that same set of values.

Mr. Schiff: With the same messenger?

Mr. Holden: It doesn’t seem to me that that compels a conclusion that there’s a violation of 501(c)(3).

(12/12/96 Holden Tr. 132–134). Later, Mr. Holden reiterated that he would have not recommended that Renewing American Civilization be sponsored and funded by a 501(c)(3) organization and pointed out that such activities are highly likely to attract the attention of the IRS.

[T]hose funds are deductible and the conjunction of politics and a (c)(3) organization is so explosive as a mix that it is bound to attract the attention of the Internal Revenue Service. I wouldn’t have been thinking about this committee. I would have been thinking about whether the Internal Revenue Service would have been likely to challenge. (12/12/96 Holden Tr. 146).

After Mr. Holden made this comment, the following exchange occurred:

Ms. Pelosi: So it would have raised questions[?]

Mr. Holden: Yes.

Mr. Goss: Isn’t that a little bit akin to having a yacht and an airplane on your tax return for business purposes[?]

Mr. Holden: It is one of those things that stands out. (12/12/96 Holden Tr. 146–147).

VIII. SUMMARY OF FACTS PERTAINING TO STATEMENTS MADE TO THE COMMITTEE

A. Background

On or about September 7, 1994, Ben Jones, Mr. Gingrich’s Democratic opponent in 1994, filed with the Committee a complaint against Mr. Gingrich. The complaint centered on the course and alleged that it was not a permissible activity for a section 501(c)(3) organization. (Ex. 135). On or about October 4, 1994, Mr.
Gingrich wrote the Committee in response to the complaint and primarily addressed the issues concerning the use of congressional staff for the course. In doing so he stated:

I would like to make it abundantly clear that those who were paid for course preparation were paid by either the Kennesaw State Foundation, [sic] the Progress and Freedom Foundation or GOPAC. * * * Those persons paid by one of the aforementioned groups include: Dr. Jeffrey Eisenach, Mike DuGally, Jana Rogers, Patty Stechschultez [sic], Pamela Prochnow, Dr. Steve Hanser, Joe Gaylord and Nancy Desmond. (Ex. 136, p. 2).

After the Committee received and reviewed Mr. Gingrich’s October 4, 1994 letter, it sent him a letter dated October 31, 1994, asking for additional information concerning the allegations of misuse of tax-exempt organizations in regard to the course. The Committee also asked for information relating to the involvement of GOPAC in various aspects of the course. As set forth in the letter, the Committee wrote:

There is, however, an allegation which requires explanation before the Committee can finalize its evaluation of the complaint. This is the allegation that, in seeking and obtaining funding for your course on Renewing American Civilization, you improperly used tax-exempt foundations to obtain taxpayer subsidization of political activity.

* * * * * * *

Your answers to [questions set forth in the letter] would be helpful to the Committee in deciding what formal action to take with respect to the complaint.

* * * * * * *

A number of documents submitted by Ben Jones, however, raise questions as to whether the course was in fact exclusively educational in nature, or instead constituted partisan political activity intended to benefit Republican candidates. (Ex. 137, pp. 1–2).

B. Statements Made by Mr. Gingrich to the Committee, Directly or Through Counsel

1. MR. GINGRICH’S DECEMBER 8, 1994 LETTER TO THE COMMITTEE

In a letter dated December 8, 1994, Mr.
Gingrich responded to the Committee’s October 31, 1994 letter. (Ex. 138). In that letter, Mr. Gingrich made the following statements, which he has admitted were inaccurate, incomplete, and unreliable.

1. [The course] was, by design and application, completely non-partisan. It was and remains about ideas, not politics. (Ex. 138, p. 2).

2. The idea to teach ‘).

3. The fact is, ‘‘Renewing American Civilization’’ and GOPAC have never had any official relationship. (Ex. 138, p. 4).

4. GOPAC * * * is a political organization whose interests are not directly advanced by this non-partisan educational endeavor. (Ex. 138, p. 5).

5. As a political action committee, GOPAC never participated in the administration of ‘‘Renewing American Civilization.’’ (Ex. 138, p. 4).

6. Where employees of GOPAC simultaneously assisted the project, they did so as private, civic-minded individuals contributing time and effort to a 501(c)(3) organization. (Ex. 138, p. 4).

7. Anticipating media or political attempts to link the Course to [GOPAC], ‘‘Renewing American Civilization’’ organizers went out of their way to avoid even the appearances of improper association with GOPAC. Before we had raised the first dollar or sent out the first brochure, Course Project Director Jeff Eisenach resigned his position at GOPAC. (Ex. 138, p. 4).

The goal of the letter was to have the complaint dismissed. (11/13/96 Gingrich Tr. 36).

2. MARCH 27, 1995 LETTER OF MR. GINGRICH’S ATTORNEY TO THE COMMITTEE

On January 26, 1995, Representative Bonior filed with the Committee an amended version of the Ben Jones complaint against Mr. Gingrich. (Ex. 139). Among other things, the complaint re-alleged that the Renewing American Civilization course had partisan, political purposes and was in violation of section 501(c)(3). The complaint also alleged substantial involvement of GOPAC in the course. (Ex. 139, pp. 1–7). In a letter dated March 27, 1995, Mr. Baran, Mr.
Gingrich’s attorney and a partner at the law firm of Wiley, Rein and Fielding, filed a response on behalf of Mr. Gingrich to the amended complaint. (Ex. 140, PFF 4347). Prior to the letter being delivered, Mr. Gingrich reviewed it and approved its submission to the Committee. (7/18/96 Gingrich Tr. 274–275).

Mr. Cole: If there was anything inaccurate in the letter, would you have told Mr. Baran to change it?

Mr. Gingrich: Absolutely. (7/18/96 Gingrich Tr. 275).

The letter contains the following statements, which Mr. Gingrich has admitted were inaccurate, incomplete, and unreliable.

1. As Ex. 13 demonstrates, the course solicitation * * * materials are completely non-partisan. (Ex. 140, p. 19, fn. 7).

2. GOPAC did not become involved in the Speaker’s academic affairs because it is a political organization whose interests are not advanced by this non-partisan educational endeavor. (Ex. 140, p. 35).

3. The Renewing American Civilization course and GOPAC have never had any relationship, official or otherwise. (Ex. 140, p. 35).

4. As noted previously, GOPAC has had absolutely no role in funding, promoting, or administering Renewing American Civilization. (Ex. 140, pp. 34–35).

5. GOPAC has not been involved in course fundraising and has never contributed any money or services to the course. (Ex. 140, p. 28).

6. Anticipating media or political attempts to link the course to GOPAC, course organizers went out of their way to avoid even the appearance of associating with GOPAC. Prior to becoming Course Project Director, Jeffrey Eisenach resigned his position at GOPAC and has not returned. (Ex. 140, p. 36).

The purpose of Mr. Baran’s letter was to have the Committee dismiss the complaints against Mr. Gingrich. (11/13/96 Gingrich Tr. 35–36).

C. Subcommittee’s Inquiry Into Statements Made to the Committee

On September 26, 1996, the Subcommittee expanded the scope of the Preliminary Inquiry to determine: [w]hether * * *. On October 1, 1996, the Subcommittee requested that Mr.
Gingrich produce to the Subcommittee all documents that were used or relied upon to prepare the letters at issue—the letters dated October 4, 1994, December 8, 1994 and March 27, 1995. Mr. Gingrich responded to the Committee’s request on October 31, 1996. (Ex. 141). In his response, Mr. Gingrich described how extremely busy he was at the time the October 4, 1994, and December 8, 1994 letters were prepared. He said the October 4, 1994 letter was written ‘‘in [the] context of exhaustion and focused effort’’ on finishing a congressional session, traveling to over a hundred congressional districts, tending to his duties as Whip, and running for re-election in his district. (Ex. 141, p. 1). At the time of the December 8, 1994 letter, he said that he and his staff were ‘‘making literally hundreds of decisions’’ as part of the transition in the House from Democratic to Republican control. (Ex. 141, p. 2; 11/13/96 Gingrich Tr. 6, 10, 26). With respect to his level of activity at the time the March 27, 1995 letter was created, Mr. Gingrich said the following:

[W]e were going through passing the Contract with America in a record 100 days in what many people believe was a forced march. I was, in parallel, beginning to lay out the base for the balanced budget by 2002, and I was, frankly, being too noisy publicly and damaging myself in the process. I had three projects—four; I was writing a book. So those four projects were ongoing as I was going home to report to my district, and we were being battered as part of this continuum by Bonior and others, and we wanted it handled in a professional, calm manner. We wanted to honor the Ethics process. (11/13/96 Gingrich Tr. 33–34).

Mr. Gingrich wrote in his October 31, 1996 response to the Subcommittee that ‘‘although [he] did not prepare any of the letters in question, in each case [he] reviewed the documents for accuracy.’’ (Ex. 141, p. 3).
Specifically, with respect to the October 4, 1994 letter, his assistant, Annette Thompson Meeks, showed him the draft she had created and he ‘‘read it, found it accurate to the best of [his] knowledge, and signed it.’’ (Ex. 141, p. 2). With respect to the December 8, 1994 letter, he wrote, ‘‘Again I would have read the letter carefully and concluded that it was accurate to the best of my knowledge and then signed it.’’ (Ex. 141, p. 2). With respect to the March 27, 1995 letter, he wrote that he ‘‘read [it] to ensure that it was consistent with [his] recollection of events at that time.’’ (Ex. 141, p. 3).

D. Creation of the December 8, 1994 and March 27, 1995 Letters

Mr. Gingrich appeared before the Subcommittee on November 13, 1996 to testify about these letters.78 He began his testimony by stating that the ‘‘ethics process is very important.’’ (11/13/96 Gingrich Tr. 4). He then went on to state:

On Monday I reviewed the 380-page [July 1996] interview with Mr. Cole, and I just want to begin by saying to the [C]ommittee that I am very embarrassed to report that I have concluded that reasonable people could conclude, looking at all the data, that the letters are not fully responsive, and, in fact, I think do fail to meet the standard of accurate, reliable and complete. (11/13/96 Gingrich Tr. 5).

Mr. Gingrich said several times that it was only on the Monday before his testimony—the day when he reviewed the transcript of his July interview with Mr. Cole—that he realized the letters were inaccurate, incomplete, and unreliable. (11/13/96 Gingrich Tr. 5, 8, 10, 149, 150, 195; 12/10/96 Gingrich Tr. 75). In his testimony before the Subcommittee the next month, Mr. Gingrich ‘‘apologized for what was clearly a failure to communicate accurately and completely with this [C]ommittee.’’ (12/10/96 Gingrich Tr. 5). Mr. Gingrich said the errors were a result of ‘‘a failure to communicate involving my legal counsel, my staff and me.’’ (12/10/96 Gingrich Tr. 5). Mr.
Gingrich went on to say:

After reviewing my testimony, my counsel’s testimony, and the testimony of two of his associates, the ball appears to have been dropped between my staff and my counsel regarding the investigation and verification of the responses submitted to the [C]ommittee. As I testified, I erroneously, it turns out, relied on others to verify the accuracy of the statements and responses. This did not happen. As my counsel’s testimony indicates, there was no detailed discussion with me regarding the submissions before they were sent to the [C]ommittee. Nonetheless, I bear responsibility for them, and I again apologize to the [C]ommittee for what was an inadvertent and embarrassing breakdown.

* * * * * * *

At no time did I intend to mislead the [C]ommittee or in any way be less than forthright. (12/10/96 Gingrich Tr. 5–7).

Of all the people involved in drafting, reviewing, or submitting the letters, the only person who had firsthand knowledge of the facts contained within them with respect to the Renewing American Civilization course was Mr. Gingrich.

78 Mr. Gingrich appeared twice before the Subcommittee to discuss these letters. The first time was on November 13, 1996, in response to a request from the Subcommittee that he appear and testify about the matter under oath. The second time was on December 10, 1996, as part of his opportunity to address the Subcommittee pursuant to Rule 17(a)(3) of the Committee’s Rules. Pursuant to Committee Rules, that appearance was also under oath.

1. CREATION OF THE DECEMBER 8, 1994 LETTER

According to Mr. Gingrich, after he received the Committee’s October 31, 1994 letter, he decided that the issues in the letter were too complex to be handled by his office and he sought the assistance of an attorney. (11/13/96 Gingrich Tr. 11). Mr. Gaylord, on behalf of Mr. Gingrich, contacted Jan Baran and Mr. Baran’s firm began representing Mr. Gingrich on November 15, 1994. (11/14/96 Gaylord Tr. 16;79 11/13/96 Baran Tr.
4;80 12/10/96 Gingrich Tr. 5). The response prepared by Mr. Baran’s firm became the letter from Mr. Gingrich to the Committee dated December 8, 1994. According to Mr. Baran, he did not receive any indication from Mr. Gaylord or Mr. Gingrich that Mr. Baran was to do any kind of factual review in order to prepare the response. (11/13/96 Baran Tr. 47–48).81 Mr. Baran and his staff did not seek or review documents other than those attached to the complaint of Mr. Jones and the Committee’s October 31, 1994 letter to Mr. Gingrich82 and did not contact GOPAC, Kennesaw State College, or Reinhardt College. (11/13/96 Baran Tr. 13, 15, 18). Mr. Baran did not recall speaking to Mr. Gingrich about the letter other than possibly over dinner on December 9, 1994—one day after the letter was signed by Mr. Gingrich. (11/13/96 Baran Tr. 18, 33). Mr. Baran did contact Mr. Eisenach, but did not recall the ‘‘nature of the contact.’’ (11/13/96 Baran Tr. 16). Mr. Eisenach said he had no record of ever having spoken to Mr. Baran about the letter and does not believe that he did so. (11/14/96 Eisenach Tr. 18–19, 22). The conversation he had with Mr. Baran concerned matters unrelated to the letter. (11/14/96 Eisenach Tr. 17–18). Mr. Eisenach also said that no one has ever given him a copy of the December 8, 1994 letter and asked him to verify its contents. (11/14/96 Eisenach Tr. 22). The other attorney at Wiley, Rein and Fielding involved in preparing the response was Bruce Mehlman. (11/13/96 Baran Tr. 19; 11/19/96 Mehlman Tr. 17). He was a first-year associate who had been at Wiley, Rein and Fielding since September 1994. (11/19/96 Mehlman Tr. 5).83 Mr. Mehlman’s role was to create the first draft. (11/19/96 Mehlman Tr. 15). The materials Mr. Mehlman had available to him to prepare the draft were:

1. correspondence between Mr. Gingrich and the Committee, including the October 4, 1994 letter;
2. course videotapes;

79 Mr.
Gaylord was the one to contact the firm because his position was ‘‘advisor to Congressman Gingrich’’ and he coordinated ‘‘all of the activities that were outside the official purview of [Mr. Gingrich’s] congressional responsibilities.’’ (11/14/96 Gaylord Tr. 19; 11/13/96 Baran Tr. 7). 80 Mr. Gingrich waived his attorney/client privilege and asked Mr. Baran to testify before the Committee. (11/13/96 Gingrich Tr. 5). 81 Mr. Gaylord said that he did not give any instructions to Mr. Baran about how the response should be prepared. (11/14/96 Gaylord Tr. 16–17). Mr. Baran, however, recalled that Mr. Gaylord said that the response should be completed quickly ‘‘because there was hope that the Ethics Committee would meet before the end of the year to consider this matter’’ and that it should not be too expensive. (11/13/96 Baran Tr. 7, 46–48). 82 The attachments to the October 31, 1994 letter were selected from materials that were part of the complaint filed by Mr. Jones. 83 Mr. Mehlman left Wiley, Rein & Fielding in February 1996 and is now an attorney with the National Republican Congressional Committee. (11/19/96 Mehlman Tr. 5). 83 3. the book used in the course called ‘‘Renewing American Civilization’’; 4. a course brochure; 5. the complaint filed by Ben Jones against Mr. Gingrich; and 6. documents produced pursuant to a Georgia Open Records Act request. (11/19/96 Mehlman Tr. 15–16, 20). Mr. Mehlman said that he did not attempt to gather any other documents because he did not see a need to go beyond these materials in order to prepare a response. (11/19/96 Mehlman Tr. 19–20). With the exception of contacting his brother, who had taken the course,84 Mr. Mehlman did not make any inquiries of people regarding the facts of the matter. (11/19/96 Mehlman Tr. 18). He did not, for example, contact GOPAC or Mr. Eisenach. (11/19/96 Mehlman Tr. 28). After he completed his first draft, he gave it to Mr. Baran. (11/19/96 Mehlman Tr. 22). He assumed that Mr. 
Baran would make sure that any factual questions would have been answered to his satisfaction before the letter went out. (11/19/96 Mehlman Tr. 51). However, Mr. Mehlman did not know what, if anything, Mr. Baran did with the draft after he gave it to him. (11/19/96 Mehlman Tr. 22).

When Mr. Gaylord asked Mr. Baran to prepare the letter, it was Mr. Baran’s understanding that Annette Thompson Meeks, an Administrative Assistant for Mr. Gingrich’s office, would help. (11/13/96 Baran Tr. 5, 7). According to Mr. Baran, Ms. Meeks’ role was:

basically to take a draft product from us and review it for accuracy [from] her personal knowledge and basically make sure that it was acceptable. And in that regard, I believed that she may have spoken with other people to confirm that, but you will be talking to her, and you will have to confirm it with her. I tried to not talk to her about that. (11/13/96 Baran Tr. 10).

Mr. Baran described the process for reviewing the letter as follows:

Well, you know, as a counsel who was retained relatively late in that process at that time and as someone who had no firsthand knowledge about any of the underlying activities and with a marching order of trying to prepare a draft that was usable by the staff, we were pretty much focused on getting something together and over to Annette Meeks so that it could be used. Verification was something that would have been available through those who had firsthand knowledge about these facts, who had reviewed the draft. (11/13/96 Baran Tr. 15).

Mr. Baran did not, however, know whether the letter was reviewed by others to determine its accuracy. (11/13/96 Baran Tr. 48).

84 The information obtained from his brother was used as the basis of the statement in Mr. Gingrich’s response that the course contained ‘‘as many references to Franklin Roosevelt, Jimmy Carter, and Martin Luther King, Jr. as there are to Ronald Reagan or Margaret Thatcher.’’ (11/19/96 Mehlman Tr. 20). Mr. Mehlman, however, personally reviewed only one course videotape. (11/19/96 Mehlman Tr. 21).

Ms. Meeks said that at the time the letter was being prepared, she had no knowledge of whether:

1. the course was a political or partisan activity by design or application;
2. GOPAC was involved in the course;
3. GOPAC was benefited by the course;
4. GOPAC created, funded, or administered the course;
5. the idea to teach the course arose wholly independent of GOPAC;
6. Mr. Gingrich’s motivation for teaching the course arose not as a politician but rather as a historian;
7. Mr. Eisenach resigned his position at GOPAC.

(11/14/96 Meeks Tr. 45–47).

Ms. Meeks also said she was unaware that GOPAC’s theme was Renewing American Civilization. (11/14/96 Meeks Tr. 88). Ms. Meeks said she had no role in drafting the letter, did not talk to anyone to verify that the facts in the letter were accurate, and had no knowledge of how the facts in the letter were checked for accuracy. (11/14/96 Meeks Tr. 39, 48, 51). She did not indicate to Mr. Baran that she had given the letter to anyone for the purpose of checking its accuracy. (11/14/96 Meeks Tr. 87). In this regard, Ms. Meeks said:

I will be very frank and tell you I don’t know how [Mr. Baran] composed this information as far as who he spoke with. I was not privy to any of that. The only thing I could add to my answer is that once counsel is retained, we were kind of out of the picture as far as the process, other than typing and transmitting. (11/14/96 Meeks Tr. 92).

She said her role was to provide Mr. Baran with: background information about Mr. McCarthy (the Committee’s counsel who had conferred with Mr. Gingrich about the course in 1993); a copy of the October 4, 1994 letter from Mr. Gingrich to the Committee; copies of papers relating to Mr. Hanser’s employment with Mr. Gingrich’s congressional office; and copies of the course videotapes. (11/14/96 Meeks Tr. 36–37). Mr.
Gaylord had a similar expectation in that, by retaining Wiley, Rein and Fielding, the firm was:

both protecting us and had done the proper and correct investigation in the preparation of the letters and that they, in fact, did their job because that’s what they were paid to do. And I presumed that they had extracted the information from Dr. Eisenach and others who were involved specifically in the course. (11/14/96 Gaylord Tr. 62).

Mr. Gaylord, however, did not know what inquiry Mr. Baran made in order to prepare the letter. (11/14/96 Gaylord Tr. 17).

After Mr. Baran sent Ms. Meeks a draft of the letter, Ms. Meeks re-typed the letter and sent the new version to Mr. Baran to verify that it was identical to what he had sent her. She then recalled faxing a copy to Mr. Gaylord and to Mr. Gingrich’s executive assistant ‘‘to get Newt to take a look at it.’’ (11/14/96 Meeks Tr. 43–44).

Mr. Gingrich said about his review of the letter:

And I think in my head, I was presented a document—I am not trying to blame anybody, or I am not trying to avoid this, I am trying to explain how it happened. I was presented a document and told, this is what we have collectively decided is an accurate statement of fact. I read the document, and it did not at any point leap out to me and say, boy, you had better modify paragraph 3, or that this phrase is too strong and too definitive. I think I read it one time, so that seems right to me, and I signed it. (11/13/96 Gingrich Tr. 11).

See also 11/13/96 Gingrich Tr. 10 (at the time he read the letter, ‘‘nothing leaped out at [him] and said, ‘this is wrong’ ’’) and 11/13/96 Gingrich Tr. 16 (the letter ‘‘seemed accurate’’ to him).85

Mr. Gaylord did not recall whether he reviewed the letter prior to its being sent to the Committee. (11/14/96 Gaylord Tr. 18). Mr. Gaylord said that the statement that GOPAC had no role in the administration of the course was incorrect. (11/14/96 Gaylord Tr. 30–31). Mr.
Gaylord said that the statement that GOPAC employees contributed time as private, civic-minded people was incorrect. (11/14/96 Gaylord Tr. 31). Mr. Gaylord was not asked to verify the facts in the letters. (11/14/96 Gaylord Tr. 20, 33).

2. BASES FOR STATEMENTS IN THE DECEMBER 8, 1994 LETTER

During their testimony, those involved in the creation of the letter were unable to explain the bases for many of the statements in the letter. Explanations were, however, given for the bases of some of the statements. A summary of those bases is set forth below.

1. ‘‘[The course] was, by design and application, completely non-partisan. It was and remains about ideas, not politics.’’ (Ex. 138, p. 2).

Mr. Baran said that the basis for this statement was his review of the course tapes and course materials. (11/13/96 Baran Tr. 19). Mr. Mehlman said the following about his understanding of the basis of this statement:

Well, I don’t specifically recall. If I had to assume, it would be some of the [Georgia Open Records Act] documents or some of the course materials that purport to be nonpartisan, or to have created a course that was nonpartisan, that certainly would explain design. As far as in application, probably the reference made by my brother who had seen the course, who had participated in it, I suppose, and my general basic review of the initial writings about the course and viewing the first videotape of the course, suggested that the course was nonpartisan. (11/19/96 Mehlman Tr. 24–25).

85 In early July 1993, Mr. Gingrich was interviewed about the course by a student reporter with the KSC newspaper. In that interview the following exchange took place:

Interviewer: And how is GOPAC involved in this?
Mr. Gingrich: It’s not involved in this at all.
Interviewer: Are you going to bring a lot of your ideas to GOPAC though?
Mr. Gingrich: Absolutely. Every single one of them.

(Ex. 142, p. 10). In other interviews over the past few years, Mr. Gingrich has made other statements about GOPAC’s involvement in the course. They have included, for example, the following:

1. ‘‘GOPAC had the most incidental involvement at the very beginning of the process.’’ (Atlanta Constitution, section A, page 1 (Sept. 19, 1993)).
2. ‘‘GOPAC provided some initial ideas on who might be interested in financing the course; that’s all they did.’’ (Associated Press, AM cycle (Sept. 2, 1993)).
3. ‘‘The initial work was done before we talked with Kennesaw State College at GOPAC in organizing our thoughts.’’ (The Hotline, American Political Network, Inc. (Sept. 7, 1993)).

According to Mr. Baran, the letter to the College Republicans—which was one of the attachments to the September 7, 1994 Jones complaint (Ex. 81)—did not raise a question in his mind that the course was partisan or about politics. (11/13/96 Baran Tr. 23).

2. ‘‘The idea to teach [. . .].’’

Mr. Baran said that the basis of this statement was a review of the course tapes and the belief that the course had originated from a January 25, 1993 speech Mr. Gingrich had given on the House floor. (11/13/96 Baran Tr. 24–25). At the time the letter was drafted, Mr. Baran was unaware of Mr. Gingrich’s December 1992 meeting with Owen Roberts where Mr. Gingrich first laid out his ideas for the Renewing American Civilization movement and course. (11/13/96 Baran Tr. 25). Mr. Mehlman did not speak with Mr. Gingrich about his motivations for the course and did not know if Mr. Baran had spoken with Mr. Gingrich about his motivations for teaching the course. (11/19/96 Mehlman Tr. 27).

3. ‘‘The fact is, ‘Renewing American Civilization’ and GOPAC have never had any official relationship.’’ (Ex. 38, p. 4).

Mr.
Baran said about this statement:

Well, I think the basis of [this] statement[] [was] essentially the characterizations that had been placed on the relationship between the course and GOPAC by people like Jeff Eisenach86 at that time, and it was consistent with my limited knowledge of GOPAC’s association with the course at that time. . . . You know, the various materials, some of which we went through this morning, were items that came to my attention in the course of the document production, which commenced, I think, around April of this year and took quite a bit of time, or that came up in the course of your interviews with Mr. Gingrich.

* * * * * * *

Well, I think the basis is that these statements were being reviewed by people who would presumably be in a position to correct me if there [sic] was wrong. (11/13/96 Baran Tr. 36–37).

86 Earlier in his testimony and as described above, Mr. Baran said that he had contacted Mr. Eisenach at the time the letter was being prepared, but did not recall the ‘‘nature of the contact.’’ (11/13/96 Baran Tr. 16). As also discussed above, Mr. Eisenach recalled having a discussion with Mr. Baran at the time the letter was being prepared, but about topics unrelated to the letter. (11/14/96 Eisenach Tr. 17–18).

When asked about the appearance of GOPAC fax cover sheets on documents pertaining to the course, Mr. Baran said that such faxes raised questions in his mind but that he ‘‘had an understanding at that time that those questions were addressed by an explanation that there were either incidental or inadvertent uses of GOPAC resources or there were uses of GOPAC resources that were accounted for by Mr. Eisenach.’’ (11/13/96 Baran Tr. 21). Mr. Baran could not recall how he came to this understanding. (11/13/96 Baran Tr. 21–22).

With respect to whether Mr. Baran knew that GOPAC was involved in raising funds for the course, Mr.
Baran said:

At that time my recollection of quote, GOPAC being involved in fund-raising [unquote] was focused on Ms. Prochnow, the finance director who I don’t know and have never met, but whose role was characterized, I believe, by Jeff Eisenach to me at some point, as having helped raise a couple of contributions, I think, Cracker Barrel was one of them, that is a name that sticks in my mind. But it was characterized as being sort of ancillary and just really not material. (11/13/96 Baran Tr. 41).

3. CREATION OF THE MARCH 27, 1995 LETTER

In addition to the associate, Mr. Mehlman, who had worked with Mr. Baran in drafting Mr. Gingrich’s December 8, 1994 letter to the Committee, another associate, Michael Toner, helped Mr. Baran draft what became the March 27, 1995 letter.87 (11/19/96 Toner Tr. 10–11).

87 Mr. Toner has been an associate attorney with Wiley, Rein and Fielding since September 1992, except for a period during which he worked with the Dole/Kemp campaign. (11/19/96 Toner Tr. 6).

As with the December 8, 1994 letter, Mr. Baran did not receive any indication from Mr. Gaylord or Mr. Gingrich that Mr. Baran was to do any kind of factual review in order to prepare the March 27, 1995 letter. (11/13/96 Baran Tr. 48). Mr. Baran did not recall contacting anyone outside the law firm for facts relevant to the preparation of the letter with respect to the course. He said that ‘‘the facts about the course, frankly, didn’t seem to have changed any from the December period to the March period. And our focus seemed to be elsewhere.’’ (11/13/96 Baran Tr. 28). Both Mr. Mehlman and Mr. Toner said that they did not contact anyone with knowledge of the facts at issue in order to prepare the letter. (11/19/96 Toner Tr. 21–22, 38; 11/19/96 Mehlman Tr. 38).

Ms. Meeks said that she had no role in the preparation of the letter. (11/14/96 Meeks Tr. 50). She saw it for the first time one day prior to her testimony before the Subcommittee in November 1996. (11/14/96 Meeks Tr. 50). Mr. Eisenach said that he did not have any role in the preparation of the letter nor was he asked to review it prior to its submission to the Committee. (11/14/96 Eisenach Tr. 24–25). Mr. Gaylord said that he had no role in the preparation of the letter and did not provide any information that is in the letter. (11/14/96 Gaylord Tr. 20). He also said that he did not discuss the letter with Mr. Gingrich or Mr. Baran at the time of its preparation. (11/14/96 Gaylord Tr. 21). Mr. Gaylord said that he did not know where Mr. Baran obtained the facts for the letter. He ‘‘presumed’’ that Mr. Baran and his associates had gathered the facts. (11/14/96 Gaylord Tr. 21–22).

Mr. Baran said that his role in creating the letter was to meet with Mr. Mehlman and Mr. Toner, review the status of their research and drafting, and review their drafts. (11/13/96 Baran Tr. 28). Mr. Mehlman and Mr. Toner divided responsibility for drafting portions of the letter. (11/19/96 Toner Tr. 12–14; 11/19/96 Mehlman Tr. 36, 37, 40). Mr. Baran also made edits to the letter. (11/19/96 Mehlman Tr. 40). During his interview, Mr. Toner stressed that there were many edits to the letter by Mr. Baran, Mr. Mehlman, and himself and he could, therefore, not explain who had drafted particular sentences in the letter. (See, e.g., 11/19/96 Toner Tr. 34).

After the letter was drafted, Mr. Baran said that he and his associates then ‘‘would have sent a draft that they felt comfortable with over to the Speaker’s office.’’ (11/13/96 Baran Tr. 28). Mr. Baran, Mr. Toner, and Mr. Mehlman each said during their testimony that they assumed that Mr. Gingrich or someone in his office reviewed the letter for accuracy before it was submitted to the Committee. (11/19/96 Toner Tr. 16, 40, 44; 11/13/96 Baran Tr. 32–33, 37–38; Mehlman Tr. 41). They, however, did not know whether Mr.
Gingrich or anyone in his office with knowledge of the facts at issue ever actually reviewed the letter prior to its submission to the Committee. (11/19/96 Toner Tr. 17, 40, 44; 11/13/96 Baran Tr. 37–38; Mehlman Tr. 41). With respect to Mr. Baran’s understanding of whether Mr. Gingrich reviewed the letter, the following exchange occurred:

Mr. Cole: Did you have any discussions with Mr. Gingrich concerning this letter prior to it going to the committee?
Mr. Baran: I don’t recall any. I just wanted to make sure that he did review it before it was submitted.
Mr. Cole: How did you determine that he had reviewed it?
Mr. Baran: I don’t recall today, but I would not file anything until I had been assured by somebody that he had read it.
Mr. Cole: Would that assurance also have involved him reading it and not objecting to any of the facts that are asserted in the letter?
Mr. Baran: I don’t know what his review process was regarding this letter.

* * * * * * *

Mr. Cole: If he just read it, you may still be awaiting comments from him. Would you have made sure that he had read it and approved it, or just the fact that he read it is all you would have been interested in, trying to make sure that we don’t blur that distinction?
Mr. Baran: No, I would have wanted him to be comfortable with this on many levels.
Mr. Cole: And were you satisfied that he was comfortable with it prior to filing it with the committee?
Mr. Baran: Yes.
Mr. Cole: Do you know how you were satisfied?
Mr. Baran: I can’t recall the basis upon which that happened.

(11/13/96 Baran Tr. 32–33).

4. BASES FOR STATEMENTS IN THE MARCH 27, 1995 LETTER

With respect to the bases for the statements in the letter in general, Mr. Baran said that it was largely based on the December 8, 1994 letter and any information he and his associates relied on to prepare it. (11/13/96 Baran Tr. 37–38).

IX. ANALYSIS AND CONCLUSION

A.
Tax Issues

In reviewing the evidence concerning both the AOW/ACTV project and the Renewing American Civilization project, certain patterns became apparent. In both instances, GOPAC had initiated the use of the messages as part of its political program to build a Republican majority in Congress. In both instances there was an effort to have the material appear to be non-partisan on its face, yet serve as a partisan, political message for the purpose of building the Republican Party. Under the ‘‘methodology [. . .] non-partisan in its content, and even though he assumed that the motivation for disseminating it involved partisan, political goals, he did not find a sufficiently narrow targeting of the dissemination to conclude that it was a private benefit to anyone.

Some Members of the Subcommittee and the Special Counsel agreed with Ms. Roady and concluded that there was a clear violation of 501(c)(3) with respect to AOW/ACTV and Renewing American Civilization. Other Members of the Subcommittee were troubled by reaching this conclusion and believed that the facts of this case presented a unique situation that had not previously been addressed by the legal authorities. As such, they did not feel comfortable supplanting the functions of the Internal Revenue Service or the Tax Court in rendering a ruling on what they believed to be an unsettled area of the law.

B. Statements Made to the Committee

The letters Mr. Gingrich submitted to the Committee concerning the Renewing American Civilization complaint were very troubling to the Subcommittee. They contained definitive statements about facts that went to the heart of the issues placed before the Committee. In the case of the December [. . .] ensure that the letters were accurate; however, none of Mr. Gingrich’s staff had sufficient knowledge to be able to verify the accuracy of [. . .]

C. Statement of Alleged Violation

Based on the information described above, the Special Counsel proposed a Statement of Alleged Violations (‘‘SAV’’) to the Subcommittee on December 12, 1996. The SAV contained three counts: (1) Mr. Gingrich’s activities on behalf of ALOF in regard to AOW/ACTV, and the activities of others in that regard with his knowledge and approval, constituted a violation of [. . .]

1. DELIBERATIONS ON THE TAX COUNTS

There was a difference of opinion regarding whether to issue the SAV [. . .] ensure they were done in accord with the provisions of 501(c)(3). In particular, the Subcommittee was concerned with the fact that: (1) Mr. Gingrich had been ‘‘very well aware’’ of the American Campaign Academy case prior to embarking on these projects; (2) he had been involved with 501(c)(3) organizations to a sufficient degree to know that politics and tax-deductible contributions are, as his tax counsel said, an ‘‘explosive mix;’’ (3) he was clearly involved in a project that had significant partisan, political goals, and he had taken an aggressive approach to the tax laws in regard to both A[. . .]

2. DELIBERATIONS CONCERNING THE LETTERS

The Subcommittee’s deliberation concerning the letters provided to the Committee centered on the question of whether Mr. Gingrich [. . .] SAV. All felt that this standard had been met in regard to the allegation that Mr. Gingrich ‘‘knew’’ that the information he provided to the Committee was inaccurate. However, there was considerable discussion to the effect that if Mr. Gingrich wanted to admit to submitting information to the Committee that he ‘‘should [. . .], ‘‘should have known’’ was an appropriate framing of the charge in light of all the facts and circumstances.

3. DISCUSSIONS WITH MR. GINGRICH’S COUNSEL AND RECOMMENDED SANCTION

On December 13, 1996, the Subcommittee issued an SAV [. . .]
Under the Rules of the Committee, a reprimand is the appropriate sanction for a serious violation of House Rules and a censure is appropriate for a more serious violation of House Rules. Rule 20(g), Rules of the Committee on Standards of Official Conduct. It was the opinion of the Subcommittee that this matter fell somewhere in between. Accordingly, the Subcommittee and the Special Counsel recommend that the appropriate sanction should be a reprimand and a payment reimbursing the House for some of the costs of the investigation in the amount of $300,000. Mr. Gingrich has agreed that this is the appropriate sanction in this matter.

Beginning on December 15, 1996, Mr. Gingrich’s counsel and the Special Counsel began discussions directed toward resolving the matter without a disciplinary hearing. The discussions lasted through December 20, 1996. At that time an understanding was reached by both Mr. Gingrich and the Subcommittee concerning this matter. That understanding was put on the record on December [. . .]88 [. . .] original Statement of Alleged Violations and allowing Mr. Gingrich an opportunity to withdraw his answer. And I should note that it is the intention of the subcommittee that ‘‘public comments’’ refers to press statements; that, obviously, we are free and Mr. Gingrich is free to have private conversations with Members of Congress about these matters.89

88 These changes included the removal of the word ‘‘knew’’ from the original Count 3, making the charge read that Mr. Gingrich ‘‘should have known’’ the information was inaccurate.

After the Subcommittee voted to issue the substitute SAV, [. . .] December [. . .].

X. SUMMARY OF FACTS PERTAINING TO USE OF UNOFFICIAL RESOURCES

The Subcommittee investigated allegations that Mr. Gingrich had improperly utilized the services of Jane Fortson, an employee of the Progress and Freedom Foundation (‘‘PFF’’), in violation of House Rule 45, which prohibits the use of unofficial resources for official purposes. Ms.
Fortson was an investment banker and chair of the Atlanta Housing Project who had experience in urban and housing issues. In January 1995 she moved to Washington, D.C., from Atlanta to work on urban and housing issues as a part-time PFF Senior Fellow and subsequently became a full-time PFF Senior Fellow in April 1995. The Subcommittee determined that Mr. Gingrich sought Ms. Fortson’s advice on urban and housing issues on an ongoing and meaningful basis. During an interview with Mr. Cole, Mr. Gingrich stated that although he believed he lacked the authority to give Ms. Fortson assignments, he often requested her assistance in connection with urban issues in general and issues pertaining to the District of Columbia in particular.

89 It was also agreed that in the private conversations Mr. Gingrich was not to disclose the terms of the agreement with the Subcommittee.

The investigation further revealed that Ms. Fortson appeared to have had unusual access to Mr. Gingrich’s official schedule and may have occasionally influenced his official staff in establishing his official schedule. In her capacity as an unofficial policy advisor to Mr. Gingrich, Ms. Fortson provided ongoing advice to Mr. Gingrich and members of Mr. Gingrich’s staff to assist Mr. Gingrich in conducting official duties related to urban issues. Ms. Fortson frequently attended meetings with respect to the D.C. Task Force during which she met with Members of Congress, officials of the District of Columbia, and members of their staffs. Although Mr. Gingrich and principal members of his staff advised the Subcommittee that they perceived Ms. Fortson’s assistance as limited to providing information on an informal basis, the Subcommittee discovered other occurrences which suggested that Mr. Gingrich and members of his staff specifically solicited Ms. Fortson’s views and assistance with respect to official matters.
The Subcommittee acknowledges that Members may properly solicit information from outside individuals and organizations, including nonprofit and for-profit organizations. Regardless of whether auxiliary services are accepted from a nonprofit or for-profit organization, Members must exercise caution to limit the use of outside resources to ensure that the duties of official staff are not improperly supplanted or supplemented. The Subcommittee notes that although Mr. Gingrich received two letters of reproval from the Committee on Standards regarding the use of outside resources, Ms. Fortson’s activities ceased prior to the date the Committee issued those letters to Mr. Gingrich. While the Subcommittee did not find that Ms. Fortson’s individual activities violated House Rules, the Subcommittee determined that the regular, routine, and ongoing assistance she provided Mr. Gingrich and his staff over a ten-month period could create the appearance of improper commingling of unofficial and official resources. The Subcommittee determined, however, that these activities did not warrant inclusion as a Count in the Statement of Alleged Violation.

XI. AVAILABILITY OF DOCUMENTS TO INTERNAL REVENUE SERVICE

In light of the possibility that documents which were produced to the Subcommittee during the Preliminary Inquiry might be useful to the IRS as part of its reported ongoing investigations of various 501(c)(3) organizations, the Subcommittee decided to recommend that the full Committee make available to the IRS all relevant documents produced during the Preliminary Inquiry. It is the Committee’s recommendation that the House Committee on Standards of Official Conduct in the 105th Congress establish a liaison with the IRS to fulfill this recommendation and that this liaison be established in consultation with Mr. Cole.

APPENDIX

SUMMARY OF LAW PERTAINING TO ORGANIZATIONS EXEMPT FROM FEDERAL INCOME TAX UNDER SECTION 501(c)(3) OF THE INTERNAL REVENUE CODE

A.
Introduction

Section 501(a) of the Internal Revenue Code generally exempts from federal income taxation numerous types of organizations. Among these are section 501(c)(3) organizations, which include corporations ‘‘organized and operated exclusively for religious, charitable, scientific, testing for public safety, literary, or educational purposes . . . no part of the net earnings of which inures to the benefit of any private shareholder or individual . . . and which does not participate in, or intervene in . . . any political campaign on behalf of (or in opposition to) any candidate for public office.’’ I.R.C. § 501(c)(3). Organizations described in section 501(c)(3) are generally referred to as ‘‘charitable’’ organizations and contributions to such organizations are generally deductible to the donors. I.R.C. § 170(a)(1), (c)(2).

B. The Organizational Test and the Operational Test

The requirement that a 501(c)(3) organization be ‘‘organized and operated exclusively’’ for an exempt purpose has given rise to an ‘‘organizational test’’ and an ‘‘operational test.’’ Failure to meet either test will prevent an organization from qualifying for exemption under section 501(c)(3). Treas. Reg. § 1.501(c)(3)–1(a); Levy Family Tribe Foundation v. Commissioner, 69 T.C. 615, 618 (1978).

1. ORGANIZATIONAL TEST

To satisfy the organizational test, an organization must meet three sets of requirements. First, its articles of organization must: (a) limit its purposes to one or more exempt purposes, and (b) not expressly permit substantial activities that do not further those exempt purposes. Treas. Reg. §1.501(c)(3)–1(b)(1). Second, the articles must not permit: (a) devoting more than an insubstantial part of its activities to lobbying, (b) any participation or intervention in the campaign of a candidate for public office, and (c) objectives and activities that would characterize it as an ‘‘action’’ organization. Treas. Reg. §1.501(c)(3)–1(b)(3). Third, the organization’s assets must be dedicated to exempt purposes. Treas. Reg. §1.501(c)(3)–1(b)(4). The IRS determines compliance with the organizational test solely by reference to an organization’s articles of organization.

2. OPERATIONAL TEST

To satisfy the operational test, an organization must be operated ‘‘exclusively’’ for an exempt purpose.
Though ‘‘exclusively’’ in this context does not mean ‘‘solely,’’ the presence of a substantial nonexempt purpose will cause an organization to fail the operational test. Treas. Reg. §1.501(c)(3)–1(c)(1); The Nationalist Movement v. Commissioner, 102 T.C. 558, 576 (1994). The presence of a single non-exempt purpose, if substantial in nature, will destroy the exemption regardless of the number or importance of truly exempt purposes. Better Business Bureau of Washington, D.C. v. United States, 326 U.S. 276, 283 (1945); Manning Association v. Commissioner, 93 T.C. 596, 611 (1989).

To meet the operational test under section 501(c)(3), an organization must satisfy the following requirements:90

1. The organization must be operated for an exempt purpose, and must serve a public benefit, not a private benefit. Treas. Reg. §1.501(c)(3)–1(d)(1)(ii).

2. It must not be an ‘‘action’’ organization. Treas. Reg. §1.501(c)(3)–1(c)(3). An organization is an ‘‘action’’ organization if:

a. it participates or intervenes in any political campaign (Treas. Reg. §1.501(c)(3)–1(c)(3)(iii));
b. a substantial part of its activities consists of attempting to influence legislation (Treas. Reg. §1.501(c)(3)–1(c)(3)(ii)); or
c. its primary objective may be attained only by legislation or defeat of proposed legislation, and it advocates the attainment of such primary objective (Treas. Reg. §1.501(c)(3)–1(c)(3)(iv)).

3. Its net earnings must not inure to the benefit of any person in a position to influence the organization’s activities. Treas. Reg. §1.501(c)(3)–1(c)(2).

‘‘[F]ailure to satisfy any of the [above] requirements is fatal to [an organization’s] qualification under section 501(c)(3).’’ American Campaign Academy v. Commissioner, 92 T.C. 1053, 1062 (1989). The application of these requirements, moreover, is a factual exercise. Id. at 1064; Christian Manner International v. Commissioner, 71 T.C. 661, 668 (1979).
Thus, in testing compliance with the operational test, courts look ''beyond the four corners of the organization's charter to discover 'the actual objects motivating the organization and the subsequent conduct of the organization.' '' American Campaign Academy, 92 T.C. at 1064 (citing Taxation with Representation v. United States, 585 F.2d 1219, 1222 (4th Cir. 1978)); see also Sound Health Association v. Commissioner, 71 T.C. 158, 184 (1978) (''It is the purpose toward which an organization's activities are directed that is ultimately dispositive of the organization's right to be classified as a section 501(c)(3) organization.'')

90 501(c)(3) organizations must also: (a) not be operated primarily to conduct an unrelated trade or business (Treas. Reg. §1.501(c)(3)–1(e)(1)), and (b) not violate ''public policy.'' See Bob Jones University v. United States, 461 U.S. 574 (1983) (educational organization's tax-exempt status denied because of its racially discriminatory policies).

''What an organization's purposes are and what purposes its activities support are questions of fact.'' American Campaign Academy, 92 T.C. at 1064 (citing Christian Manner International v. Commissioner, 71 T.C. 661, 668 (1979)). Courts may ''draw factual inferences'' from the record when determining whether organizations meet the requirements of the tax-exempt organization laws and regulations. Id. (citing National Association of American Churches v. Commissioner, 82 T.C. 18, 20 (1984)).

a. ''Educational'' Organizations May Qualify for Exemption Under Section 501(c)(3)

As discussed above, an organization may qualify for exemption under section 501(c)(3) if it is ''educational.'' 91 The Regulations define the term ''educational'' as relating to: (a) [t]he instruction or training of the individual for the purpose of improving or developing his capabilities; or (b) [t]he instruction of the public on subjects useful to the individual and beneficial to the community. Treas. Reg.
§1.501(c)(3)–1(d)(3)(i). The Regulations continue:

An organization may be educational even though it advocates a particular position or viewpoint so long as it presents a sufficiently full and fair exposition of the pertinent facts as to permit an individual or the public to form an independent opinion or conclusion. On the other hand, an organization is not educational if its principal function is the mere presentation of unsupported opinion.

Id. Guidance on the phrase ''advocates a particular position or viewpoint'' can be found in the preceding section in the Regulations pertaining to the definition of ''charitable'':

The fact that an organization, in carrying out its primary purpose, advocates social or civil changes or presents opinion on controversial issues with the intention of molding public opinion or creating public sentiment to an acceptance of its views does not preclude such organization from qualifying under section 501(c)(3) so long as it is not an ''action'' organization. * * *

Treas. Reg. §1.501(c)(3)–1(d)(2). In applying the Regulations under section 501(c)(3) pertaining to educational organizations, the IRS has stated that its goal is to eliminate or minimize the potential for any public official to impose his or her preconceptions or beliefs in determining whether the particular viewpoint or position is educational. Rev. Proc. 86–43, 1986–2 C.B. 729. IRS policy is to ''maintain a position of disinterested neutrality with respect to the beliefs advocated by an organization.'' Id. The focus of the Regulations pertaining to educational organizations and of the IRS's application of these Regulations ''is not upon the viewpoint or position, but instead upon the method used by the organization to communicate its viewpoint or positions to others.'' Id.

91 An organization may also qualify for section 501(c)(3) exemption if it is organized and operated for, e.g., ''religious,'' ''charitable,'' or ''scientific'' purposes. The other methods by which an organization can qualify for exemption are not discussed in this summary.

Two court decisions considered challenges to the constitutionality of the definition of ''educational'' in the Regulations cited above. One decision held that the definition was unconstitutionally vague. Big Mama Rag, Inc. v. United States, 631 F.2d 1030 (D.C. Cir. 1980). In National Alliance v.
United States, 710 F.2d 868 (D.C. Cir. 1983), the court upheld the IRS's position that the organization in question was not educational. Without ruling on the constitutionality of the ''methodology test'' used by the IRS in that case to determine whether the organization was educational, the court found that the application of that test reduced the vagueness found in Big Mama Rag. The IRS later published the methodology test in Rev. Proc. 86–43 in order to clarify its position on how to determine whether an organization is educational when it advocates particular viewpoints or positions. As set forth in the Revenue Procedure:

The presence of any of the following factors in the presentations made by an organization is indicative that the method used by the organization to advocate its viewpoints or positions is not educational.
(a) The presentation of viewpoints or positions unsupported by facts is a significant portion of the organization's communications.
(b) The facts that purport to support the viewpoints or positions are distorted.
(c) The organization's presentations make substantial use of inflammatory and disparaging terms and express conclusions more on the basis of strong emotional feelings than of objective evaluations.
(d) The approach used in the organization's presentations is not aimed at developing an understanding on the part of the intended audience or readership because it does not consider their background or training in the subject matter.

According to Rev. Proc. 86–43, the IRS uses the methodology test in all situations where the educational purpose of an organization that advocates a viewpoint or position is in question. However, ''[e]ven if the advocacy undertaken by an organization is determined to be educational under [the methodology test], the organization must still meet all other requirements for exemption under section 501(c)(3) * * *.'' Rev. Proc. 86–43.
That is, organizations deemed to be ''educational'' must also abide by the section 501(c)(3) prohibitions on: (a) private benefit, (b) participating or intervening in a political campaign, (c) engaging in more than insubstantial lobbying activities, and (d) private inurement.

b. To Satisfy the Operational Test, an Organization Must Not Violate the ''Private Benefit'' Prohibition

Section 501(c)(3) requires, inter alia, that an organization be organized and operated exclusively for one or more exempt purposes. Treas. Reg. §1.501(c)(3)–1(d)(1)(ii) provides that an organization does not meet this requirement:

unless it serves a public rather than a private purpose. Thus, * * * it is necessary for an organization to establish that it is not organized or operated for the benefit of private interests such as designated individuals, the creator or his family, shareholders of the organization, or persons controlled, directly or indirectly, by such private interests.

The ''private benefit'' prohibition serves to ensure that the public subsidies flowing from section 501(c)(3) status, including income tax exemption and the ability to receive tax-deductible charitable contributions, are reserved for organizations that are formed to serve public rather than private interests. The Regulations and cases applying them make it clear that the private benefit test focuses on the purpose or purposes served by an organization's activities, and not on the nature of the activities themselves. See, e.g., B.S.W. Group, Inc. v. Commissioner, 70 T.C. 352 (1978). Where an organization's activities serve more than one purpose, each purpose must be separately examined to determine whether it is private in nature and, if so, whether it is more than insubstantial. Christian Manner International v. Commissioner, 71 T.C. 661 (1979). The leading case on the application of the private benefit prohibition in the context of an organization whose activities served both exempt and nonexempt purposes is Better Business Bureau v.
United States, 326 U.S. 279 (1945). Better Business Bureau was a nonprofit organization formed to educate the public about fraudulent business practices, to elevate business standards, and to educate consumers to be intelligent buyers. The Court did not question the exempt purpose of these activities. The Court found, however, that the organization was ''animated'' by the purpose of promoting a profitable business community, and that such business purpose was both nonexempt and more than insubstantial. The Court denied exemption, stating (in language that is cited in virtually all later private benefit cases) that:

[I]n order to fall within the claimed exemption, an organization must be devoted to educational purposes exclusively. This plainly means that the presence of a single noneducational purpose, if substantial in nature, will destroy the exemption regardless of the number or importance of truly educational purposes.

Id. at 283. Many of the cases interpreting the private benefit prohibition involve private benefits that are provided in a commercial context—as in the Better Business Bureau case. Impermissible private benefit, however, need not be financial in nature. Callaway Family Association v. Commissioner, 71 T.C. 340 (1978), involved a family association formed as a nonprofit corporation to study immigration to and migration within the United States by focusing on its own family history and genealogy. The organization's activities included researching the genealogy of Callaway family members in order to publish a family history. The organization argued that its purposes were educational and intended to benefit the general public, asserting that its use of a research methodology focusing on one family's development was a way of educating the public about the country's history. In Callaway, the court noted (and the IRS conceded) that the organization's activities served an educational purpose.
The issue was not whether the organization had any exempt purposes, but whether it also engaged in activities that furthered a nonexempt purpose more than insubstantially. Agreeing with the IRS that ''petitioner aimed its organizational drive at Callaway family members, and appealed to them on the basis of their private interests,'' the court concluded that the organization ''engages in nonexempt activities serving a private interest, and these activities are not insubstantial.'' Id. at 343–44. Accordingly, the court held that the organization did not qualify for exemption under section 501(c)(3). Kentucky Bar Foundation v. Commissioner, 78 T.C. 921 (1982), is one of the relatively few cases in which a court found private benefit to be insubstantial and therefore not to preclude exemption under section 501(c)(3). The Kentucky Bar Foundation was formed to conduct a variety of activities recognized by the IRS to serve exclusively educational purposes, including a continuing legal education program and the operation of a public law library. The IRS, however, asserted that the Foundation's operation of a statewide lawyer referral service also served private purposes. Through the referral service, a person seeking a lawyer was referred to an attorney selected on a rotating basis within a convenient geographic area. The fee for an initial half-hour consultation was $10; any charge for further consultation or work had to be agreed upon by the attorney and the client. The court found that the purposes of the referral service were to assist the general public in locating an attorney to provide a consultation for a reasonable fee, to encourage lawyers to recognize the obligation to provide legal services to the general public, and to acquaint people in need of legal services with the value of consultation with a lawyer to identify and solve legal problems.
The IRS asserted that a purpose of the referral service was to benefit lawyers, particularly to help young law school graduates establish a practice, and that this was a substantial nonexempt purpose. Based on a careful examination of the facts, however, the court found that:

[t]he referral service is open to all responsible attorneys, and there is no evidence a selected group of attorneys are the primary beneficiaries of the service. The referral service is intended to benefit the public and not to serve as a source of referrals. We find any nonexempt purpose served by the referral service and any occasional economic benefit flowing to individual attorneys through a referral incidental to the broad charitable purpose served.

Id. at 926. Reiterating the proposition that ''the proper focus is the purpose or purposes toward which the activities are directed,'' the court found that the purpose of the legal referral service was to benefit the public, that any private benefit was broadly distributed, not conferred on any select group of attorneys and incidental to the public purpose, and that the organization qualified for exemption under section 501(c)(3). Id. at 923, 925–26 (citing B.S.W. Group v. Commissioner, 70 T.C. 352, 356–57 (1978)). As the cases described above show, the determination as to whether private benefit is incidental (and therefore permissible) or more than incidental (and therefore prohibited) is inherently factual, and each case must be decided on its own facts and circumstances. See also Manning Association v. Commissioner, 93 T.C. 596 (1989). The IRS has issued several published and private rulings and general counsel memoranda 92 that further explain the private benefit prohibition. For example, in Rev. Rul. 70–186, 1970–1 C.B. 128, an organization was formed to preserve a lake as a public recreational facility and to improve the lake water's condition.
Although the organization's activities benefited the public at large, there were necessarily significant benefits to the individuals who owned lake-front property. The IRS, however, determined that the private benefit to the lake-front property owners was incidental because:

[t]he benefits to be derived from the organization's activities flow principally to the general public through the maintenance and improvement of public recreational facilities. Any private benefits derived by the lakefront property owners do not lessen the public benefits flowing from the organization's operations. In fact, it would be impossible for the organization to accomplish its purposes without providing benefits to the lakefront property owners.

Id. In Rev. Rul. 75–196, 1975–1 C.B. 155, the IRS ruled that a 501(c)(3) organization operating a law library whose rules essentially limited access and use to local bar association members conferred only incidental benefits to the bar association members. The library's availability only to a designated class of persons was not a bar to recognition of exemption because:

[w]hat is of importance is that the class benefited be broad enough to warrant a conclusion that the educational facility or activity is serving a broad public interest rather than a private interest, and is therefore exclusively educational in nature.

Id. The library was available to a significant number of people, and the restrictions on the library's use were due to the limited size of its facilities.

92 Private letter rulings and general counsel memoranda are made available to the public under section 6110 of the Code. These documents are based on the facts of particular cases, and may not be relied on as precedent. However, they provide useful insights as to how the IRS interprets and applies the law in particular factual situations.
Although attorneys who used the library might derive personal benefit in their practice, the IRS ruled that this benefit was incidental to the library's exempt purpose and a ''logical byproduct of an educational process.'' Id. Two other revenue rulings with similar fact patterns are also helpful in understanding the application of the ''incidental benefits'' concept. In one ruling, the IRS ruled that an organization that limited membership to the residents of one city block did not qualify as a 501(c)(3) organization because the organization's members benefited directly, thus not incidentally, from the organization's activities. Rev. Rul. 75–286, 1975–2 C.B. 210. In another, the IRS ruled that an organization dedicated to beautification of an entire city qualified as a 501(c)(3) organization because benefits flowed to the city's entire population and were not targeted to the organization's members. Rev. Rul. 68–14, 1968–1 C.B. 243. The benefits to the organization's members of living in a cleaner city were considered incidental. The IRS issued a recent warning about the importance of the private benefit prohibition in Rev. Proc. 96–32, 1996–20 I.R.B. 14, a Revenue Procedure issued for the purpose of establishing standards as to whether organizations that own and operate low income housing (an activity conducted by both nonprofit and for-profit organizations) may qualify for exemption under section 501(c)(3). After reviewing the substantive criteria that must be present to establish that the organization is formed for a charitable purpose, the IRS added a final caution:

If an organization furthers a charitable purpose such as relieving the poor and distressed, it nevertheless may fail to qualify for exemption because private interests of individuals with a financial stake in the project are furthered.
For example, the role of a private developer or management company in the organization's activities must be carefully scrutinized to ensure the absence of inurement or impermissible private benefit resulting from real property sales, development fees, or management contracts.

Id. One of the most detailed explanations of the private benefit prohibition is contained in G.C.M. 39862 (Nov. 22, 1991), involving the permissibility of a hospital's transaction with physicians. In the G.C.M., the IRS explained the prohibition as follows:

Any private benefit arising from a particular activity must be ''incidental'' in both a qualitative and a quantitative sense. To be qualitatively incidental, the private benefit must be a necessary concomitant of the activity that benefits the public at large; that is, the benefit to the public cannot be achieved without necessarily benefiting private individuals. Such benefits might also be characterized as indirect or unintentional. To be quantitatively incidental, a benefit must be insubstantial when viewed in relation to the public benefit conferred by the activity.

Id. The IRS also explained that the insubstantiality of the private benefit is measured only in relationship to the activity in which the private benefit is present, and not in relation to the organization's overall activities:

It bears emphasis that, even though exemption of the entire organization may be at stake, the private benefit conferred by an activity or arrangement is balanced only against the public benefit conferred by that activity or arrangement, not the overall good accomplished by the organization.

Id. In G.C.M. 39862, the IRS balanced the private benefits to the physicians from the transaction at issue with the public purposes served by that particular activity—and not the public purposes served by the hospital as a whole. Finding the private purposes from the activity at issue to be more than incidental in relation to the public purposes, the IRS determined that the hospital had jeopardized its exemption under section 501(c)(3).
Although most of the cases and IRS rulings (both public and private) follow the general analysis described above in determining whether or not private benefit is insubstantial, a fairly recent Tax Court case, American Campaign Academy v. Commissioner, 92 T.C. 1053 (1989), adopts a slightly different approach. In that case, the primary activity of American Campaign Academy (''ACA'' or ''the Academy'') was the operation of a school to train people to work in political campaigns. The IRS denied ACA's application for exemption under section 501(c)(3), and ACA appealed the denial to the Tax Court. The Tax Court upheld the IRS's denial of ACA's application for exemption because ACA's activities conferred an impermissible private benefit on Republican candidates and entities. The school operated by ACA was an ''outgrowth'' of programs the National Republican Congressional Committee (''NRCC'') once sponsored to train candidates and to train campaign professionals for Republican campaigns. The Academy program, however, differed from its NRCC predecessor in that it limited its students to ''campaign professionals.'' Id. at 1056. Without discussion, the IRS stated that the Academy did not train candidates, participate in any political campaign or attempt to influence legislation. Id. at 1056–57. The Academy did not use training materials developed by the NRCC, generally did not use NRCC faculty, and developed its own courses. Id. at 1057. Students were not explicitly required to be affiliated with any particular party, nor were they required to take positions with partisan organizations upon graduation. Id. at 1058. The Academy had a number of direct and indirect connections to Republican organizations. The NRCC contributed furniture and computer hardware to the Academy. Id. at 1056. One of the Academy's three directors, Joseph Gaylord, was the Executive Director of the NRCC; another director, John McDonald, was a member of the Republican National Committee. Id.
Jan Baran, General Counsel of the NRCC at the time of the Academy's application to the IRS, incorporated the Academy. Id. at 1070. The National Republican Congressional Trust funded the Academy. Id. The Academy curriculum included studies of the ''Growth of NRCC, etc.'' and ''Why are people Republicans,'' but did not contain comparable studies pertaining to the Democratic or other political parties. Id. at 1070–71. People on the admissions panel were affiliated with the Republican Party. Id. at 1071. Furthermore, while the applicants were not required to declare a party affiliation on their application, the political references students were required to submit ''often permit[ted] the admission panel to deduce the applicant's political affiliation.'' Id. Finally, the Court found that all but one of the Academy graduates who could be identified as later serving in political positions ended up serving Republican candidates or Republican organizations. Id. at 1060, 1071, 1072. In light of these facts, the Tax Court upheld the IRS's denial of the Academy's application for exemption under section 501(c)(3) because the Academy ''conducted its educational activities with the partisan objective of benefiting Republican candidates and entities.'' Id. at 1070. No single fact listed in the previous paragraph, standing alone, supported the IRS's finding or the court's holding that the Academy was organized for a non-exempt purpose. The IRS did not argue, and the court did not hold, for example, that individuals who are all members of the same political party are prohibited from operating a 501(c)(3) organization, or that an organization may not receive an exemption under section 501(c)(3) if a partisan organization funds it. Rather, the Tax Court focused on the purpose behind ACA's activities. In determining this, it drew ''factual inferences'' from the record to discern that purpose.
Those inferences led to the court's conclusion that the Academy ''targeted Republican entities and candidates to receive the secondary benefit through employing its alumni * * *.'' Id. at 1075. The Tax Court's analysis distinguished between ''primary'' private benefit and ''secondary'' private benefit, and made clear that the latter can be a bar to section 501(c)(3) qualification. In this case, the students received the primary private benefit of the Academy, and this benefit was permissible and consistent with the Academy's educational purposes. The students' ultimate employers, Republican candidates and entities, received the secondary benefits of the Academy. ''[W]here the training of individuals is focused on furthering a particular targeted private interest [e.g., Republican candidates and entities], the conferred secondary benefit ceases to be incidental to the providing organization's exempt purposes.'' Id. at 1074. For the Academy to have prevailed, according to the Tax Court, it needed to demonstrate: (1) that the candidates and entities who received the benefit of trained campaign workers possessed the characteristics of a ''charitable class,'' 93 and (2) that it did not distribute benefits among that class in a select manner. Id. at 1076. The Academy argued that Republican candidates and entities were ''charitable'' because the Republican party consists of millions of people with ''like 'political sympathies' '' and their activities benefited the community at large. Id. The Court ruled, however, that size alone does not transform a benefited class into a charitable class and that ACA had failed to demonstrate that political entities and candidates possessed the characteristics of a charitable class. Id. at 1077. Moreover, the Tax Court held that even if political candidates and entities could be found to constitute a ''charitable class,'' ACA's benefits were distributed in a select manner to Republican candidates and entities. Id. Finally, the Academy argued that although it hoped that alumni would work in Republican organizations or for Republican candidates, it had no control over whether they would do so. Absent an ability to control the students' employment, the Academy argued, it lacked the ability to confer secondary benefits to Republican candidates and entities. Id. at 1078. The Court found that there was no authority for the proposition that the organization must be able to control non-incidental benefits. Furthermore, the Court reiterated that the record supported the IRS's determination that the Academy was formed ''with a substantial purpose to train campaign professionals for service in Republican entities and campaigns, an activity previously conducted by NRCC.'' Id. According to the Court, accepting the Academy's argument regarding its inability to control non-incidental benefits would ''cloud the focus of the operational test, which probes to ascertain the purpose towards which an organization's activities are directed and not the nature of the activities themselves.'' Id. at 1078–79 (citing B.S.W. Group v. Commissioner, 70 T.C. 352, 356–57 (1978)). The Court noted that had the record demonstrated that ''the Academy's activities were nonpartisan in nature and that its graduates were not intended to primarily benefit Republicans,'' the Court would have found for the Academy. Id. at 1079. The American Campaign Academy case follows existing precedent. In reaching its decision, the court relies on Better Business Bureau and Kentucky Bar Foundation, among other cases, for the legal standards governing the private benefit prohibition. The court recognizes that the ACA's activities were intended to serve multiple purposes, including the education of students (the permissible primary benefit) and the provision of trained campaign professionals for candidates and entities (the secondary benefit). Finding the secondary benefit to be targeted to a select group—Republican candidates and entities—the court concludes that such benefit is more than incidental and therefore precludes exemption under section 501(c)(3).

93 This part of the Tax Court's analysis in American Campaign Academy has been criticized by a few commentators, who have disagreed with the court's application of the ''charitable class'' doctrine in the context of an educational organization. See, e.g., Bruce R. Hopkins, Republican Campaign School Held Not Tax Exempt, The Nonprofit Counsel, July 1989, at 3; Laura B. Chisolm, Politics and Charity: A Proposal for Peaceful Coexistence, 58 Geo. Wash. L. Rev. 308, 344 n.159 (1990). Typically an educational organization is expected to serve a broad class representative of the public interest, but not a ''charitable class'' per se. The court's consideration of the question as to whether political candidates and entities could constitute a charitable class might be misplaced, but is not critical to its holding. As the court notes, ''even were we to find political entities and candidates to generally comprise a charitable class, petitioner would bear the burden of proving that its activities benefited the members of the class in a nonselect manner.'' The court's finding that such benefits were conferred in a select manner—to Republican candidates and entities—was the basis for its holding that the organization served private purposes more than incidentally and, therefore, failed to qualify for exemption under section 501(c)(3).

c. To Satisfy The Operational Test, An Organization Must Not Be An ''Action'' Organization

An organization is not operated exclusively for one or more exempt purposes if it is an ''action'' organization. Treas. Reg. §1.501(c)(3)–1(c)(3).
Such an organization cannot qualify for exemption under section 501(c)(3). Treas. Reg. §1.501(c)(3)–1(c)(3)(v). An organization is an action organization if:
(i) It ''participates or intervenes, directly or indirectly, in any political campaign on behalf of or in opposition to any candidate for public office;''
(ii) a ''substantial part'' of its activities consists of ''attempting to influence legislation by propaganda, or otherwise;'' or
(iii) its primary objective may be attained ''only by legislation or a defeat of proposed legislation,'' and ''it advocates, or campaigns for, the attainment'' of such primary objective.
Treas. Reg. §1.501(c)(3)–1(c)(3).

(i) If an Organization Participates in a Political Campaign, It is an Action Organization Not Entitled to Exemption Under Section 501(c)(3)

Section 501(c)(3) provides that an organization is not entitled to exemption if it ''participate[s] in, or intervene[s] in (including the publishing or distributing of statements) any political campaign on behalf of (or in opposition to) any candidate for public office.'' The reason for this prohibition is clear. Contributions to section 501(c)(3) organizations are deductible for federal income tax purposes, but contributions to candidates and political action committees (''PACs'') are not. The use of section 501(c)(3) organizations to support or oppose candidates or PACs would circumvent federal tax law by enabling candidates or PACs to attract tax-deductible contributions to finance their election activities. As the U.S. Court of Appeals for the Tenth Circuit explained, ''[t]he limitations in Section 501(c)(3) stem from the congressional policy that the United States Treasury should be neutral in political affairs and that substantial activities directed to attempts to * * * affect a political campaign should not be subsidized.'' Christian Echoes National Ministry, Inc. v. United States, 470 F.2d 849, 854 (10th Cir. 1972), cert. denied, 419 U.S. 1107 (1975) (emphasis in original).
The prohibition on political campaign intervention was added to the Internal Revenue Code as a floor amendment to the 1954 Revenue Act offered by Senator Lyndon Johnson, who believed that a section 501(c)(3) organization was being used to help finance the campaign of an opponent. In introducing the amendment, Senator Johnson said that it was to ''deny[] tax-exempt status to not only those people who influence legislation but also to those who intervene in any political campaign on behalf of any candidate for any public office.'' 100 Cong. Rec. 9604 (1954) (discussed in Bruce R. Hopkins, ''The Law of Tax-Exempt Organizations,'' 327 (6th ed. 1992)). No congressional hearing was held on the subject and the conference report did not contain any analysis of the provision. Judith E. Kindell and John F. Reilly, ''Election Year Issues,'' 1993 Exempt Organizations Continuing Professional Education Technical Instruction Program 400, 401 (hereinafter ''IRS CPE Manual''). 94 Although the prohibition on political campaign intervention was not formally added to section 501(c)(3) until 1954, the concept that charities should not participate in political campaigns was not new. As the Second Circuit noted, ''[t]his provision merely expressly stated what had always been understood to be the law. Political campaigns did not fit within any of the specified purposes listed in [Section 501(c)(3)].'' The Association of the Bar of the City of New York v. Commissioner, 858 F.2d 876, 879 (2d Cir. 1988) (hereinafter ''New York Bar'') (quoting 9 Mertens, Law of Federal Income Taxation §34.05 at 22 (1983)). 95 Furthermore, congressional concerns that the government not subsidize political activity have existed since at least the time when Judge Learned Hand wrote ''[p]olitical agitation * * * however innocent the aim * * * must be conducted without public subvention * * *.'' Slee v. Commissioner, 42 F.2d 184, 185 (2d Cir. 1930), quoted in New York Bar, 858 F.2d at 879.
In 1987, Congress amended section 501(c)(3) to clarify that the prohibition on political campaign activity applied to activities in opposition to, as well as on behalf of, any candidate for public office. Omnibus Budget Reconciliation Act, Pub. L. No. 100–203, §10711, 101 Stat. 1330, 1330–464 (1987). The House Report accompanying the bill stated that ‘‘[t]he prohibition on political campaign activities * * * reflect[s] congressional policies that the U.S. Treasury should be neutral in political affairs * * *.’’ H.R. Rep. No. 100–391, at 1625 (1987); see also S. Rep. No. 91–552, at 46–49 (Tax Reform Act of 1969) (interpreting section 501(c)(3) to mean that ‘‘no degree of support for an individual’s candidacy for public office is permitted’’).

The scope of the prohibition on political campaign intervention has been the subject of much discussion. While certain acts are clearly proscribed, others may be permissible or prohibited, depending on the purpose and effect of the activity. The regulations interpreting the prohibition add little to the statutory definition. Treas. Reg. §1.501(c)(3)–1(c)(3)(iii). Under this provision, a section 501(c)(3) organization is prohibited from making a written or oral endorsement of a candidate and from distributing partisan campaign literature. IRS CPE Manual at 410. Following the enactment of section 527 of the Code in 1974 (governing the federal tax treatment of PACs), the prohibition also prevents section 501(c)(3) organizations from establishing or supporting a PAC. IRS CPE Manual at 437. (The application of the prohibition in this context is discussed further below.)

94 The 1993 Exempt Organizations Continuing Professional Education (CPE) Technical Instruction Program text was prepared by the IRS Exempt Organizations Division for internal training purposes.

95 Indeed, under the common law of charitable trusts—the genesis of modern day section 501(c)(3)—it was recognized that ‘‘a trust to promote the success of a particular political party is not charitable,’’ for the reason that ‘‘there is no social interest in the underwriting of one or another of the political parties.’’ Restatement (Second) of Trusts §374 (1959). The continued importance of the common law doctrine of ‘‘charitability’’ to the standards for exemption under section 501(c)(3) is reflected in the Supreme Court decision in Bob Jones University v. United States, 461 U.S. 574 (1983), in which the Court denied exemption to a private university that practiced racial discrimination, on the ground that racial discrimination was contrary to public policy and therefore inconsistent with the common law standards for charitability.

It is clear, however, that section 501(c)(3) organizations also may violate the prohibition by engaging in activity that falls short of a direct endorsement, and even may—on its face—appear neutral, if the purpose or effect of the activity is to support or oppose a candidate. The IRS CPE Manual describes a variety of situations in which section 501(c)(3) organizations may violate the prohibition without engaging in a direct candidate endorsement, including inviting a particular candidate to make an appearance at an organization event, holding candidate forums or distributing voter guides which evidence a bias for or against a candidate, and similar activities that may support or oppose a particular candidate. IRS CPE Manual at 419–424, 430–432. In a recent election year news release, the IRS reminded section 501(c)(3) organizations of the breadth of the prohibition, stating not only that they cannot endorse candidates or distribute statements in support of or opposition to candidates, but also that they cannot ‘‘become involved in any other activities that may be beneficial or detrimental to any candidate.’’ IRS News Release IR–96–23 (Apr.
24, 1996).

While it is easy for the IRS to determine whether the prohibition on political campaign intervention has been violated when a section 501(c)(3) organization endorses a candidate or distributes partisan campaign literature, it is more difficult to determine whether there is a violation if the activity at issue is not blatant or serves a nonpolitical purpose as well. The IRS relies on a ‘‘facts and circumstances’’ test in analyzing ambiguous behavior to determine whether there has been a violation. According to the IRS:

[i]n situations where there is no explicit endorsement or partisan activity, there is no bright-line test for determining if the IRC 501(c)(3) organization participated or intervened in a political campaign. Instead, all the facts and circumstances must be considered.

IRS CPE Manual at 410. Despite the lack of bright-line standards concerning all aspects of the prohibition, there is a substantial body of authority concerning what section 501(c)(3) organizations can and cannot do, and many section 501(c)(3) organizations have little difficulty applying existing precedents to develop internal guidelines for what activities are permissible and prohibited. For example, the Office of General Counsel of the United States Catholic Conference issued guidelines on political activities to Catholic organizations on February 14, 1996, in anticipation of the 1996 election season.96 The guidelines outline the parameters of permissible activity, including unbiased voter education, nonpartisan get-out-the-vote drives, and nonpartisan public forums. They also describe what activity is prohibited, including the endorsement of candidates, the distribution of campaign literature in support of or opposition to candidates, and the provision of financial and in-kind support to candidates or PACs. With respect to the latter, the guidelines state flatly that:

[A] Catholic organization may not provide financial support to any candidate, PAC, or political party. Likewise, it may not provide or solicit in-kind support, such as free or selective use of volunteers, paid staff, facilities, equipment, mailing lists, etc.

‘‘Political Activity Guidelines for Catholic Organizations’’ (United States Catholic Conference, Office of the General Counsel, Washington, D.C.), Feb. 14, 1996, reprinted in Paul Streckfus’ EO Tax Journal, November 1996 at 35, 42. The generally accepted aspects of the campaign intervention prohibition, as well as some areas of uncertainty, are discussed below.

96 Some churches assert that they have a First Amendment right to participate in political campaign activities where doing so furthers their religious beliefs. However, courts have ruled that tax exemption is a privilege and not a right, and that section 501(c)(3) does not prohibit churches from participating in political campaigns but merely provides that they will not be entitled to tax exemption if they do so. See, e.g., Christian Echoes National Ministry, Inc. v. United States, 470 F.2d 849 (10th Cir. 1972).

(a) The Prohibition Is ‘‘Absolute’’

The prohibition on political campaign intervention or participation is ‘‘absolute.’’ IRS CPE Manual at 416. Unlike the prohibition on lobbying, there is no requirement that political campaign participation or intervention be substantial. New York Bar, 858 F.2d at 881. It is, therefore, irrelevant that the majority, or even all but a small portion, of an organization’s activities would, by themselves, support exemption under section 501(c)(3). United States v. Dykema, 666 F.2d 1096, 1101 (7th Cir. 1981); see also G.C.M. 39694 (Jan. 22, 1988) (‘‘An organization described in section 501(c)(3) is precluded from engaging in any political campaign activities’’) and P.L.R. 9609007 (Dec. 6, 1995) (‘‘For purposes of section 501(c)(3), intervention in a political campaign may be subtle or blatant. It may seem to be justified by the press of events. It may even be inadvertent.
The law prohibits all forms of participation or intervention in ‘any’ political campaign.’’)97

97 See also G.C.M. 38137 (Oct. 22, 1979):

[T]he prohibition on political activity makes no reference to the intent of the organization. An organization can violate the proscription even if it acts for reasons other than intervening in a political campaign. For example, an organization that hires a political candidate to do commercials for its charity drive and runs the commercials frequently during the political campaign may have no interest in supporting the candidate’s campaign. Nevertheless, its action would constitute, at least, indirect intervention or support of the political campaign.

However, the same G.C.M. goes on to say:

We do not mean to imply that every activity that has an effect on a political campaign is prohibited political activity. We recognize that organizations may inadvertently support political candidates. In these instances the organizations have not ‘‘intervened’’ or ‘‘participated’’ in political campaigns. A hospital that provides emergency health care for a candidate acts on behalf of the candidate during the election, but only inadvertently supports his campaign.

Although the prohibition on political campaign intervention under section 501(c)(3) is absolute, Congress recognized that the sanction of loss of tax exemption could, in some cases, be disproportionate to the violation. In 1987, Congress added section 4955 to the Code, which imposes excise tax penalties on section 501(c)(3) organizations that make ‘‘political expenditures’’ in violation of the prohibition, as well as on organization managers who knowingly approve such expenditures. The legislative history provides that the enactment of section 4955 was not intended to modify the absolute prohibition of section 501(c)(3), but to provide an alternative remedy that could be used by the IRS in cases where the penalty of revocation seems disproportionate to the violation: i.e., where the expenditure was unintentional and involved only a small amount and where the organization subsequently has adopted procedures to assure that similar expenditures would not be made in the future. H.R. Rep. No. 100–391, at 1623–24 (1987). The legislative history also provides that the excise tax may be imposed in cases involving significant, uncorrected violations of the prohibition, where revocation alone may be ineffective because the organization has ceased operations after diverting its assets to an improper purpose. In these cases, the excise tax penalty on organization managers may be the only effective way to penalize the violation. Id. at 1624–25.

The IRS has shown an inclination to impose the excise tax under section 4955 in lieu of revocation of exemption in cases where the violation appears to be minor in relation to the organization’s other exempt purpose activities.98 For example, P.L.R. 9609007 (Dec. 6, 1995) involved a section 501(c)(3) organization that sent out a fundraising letter linking the organization to issues raised in the particular campaigns. The IRS concluded that the letters evidenced a bias for one candidate over the other. The organization sought to defend itself by saying only a few of the letters were sent to the states whose elections were mentioned in the letters. The IRS rejected this defense, stating that:

[I]t is common knowledge that in recent times the primary source of a candidate’s support in such elections is often derived from out-of-state sources.
Although a particular reader may not have been eligible to actually vote for the described candidate, he or she could have been charged by [the organization], in our view, to participate in the candidate’s campaign through direct monetary or in-kind support, volunteerism, molding of public opinion, or the like.

Id. The IRS found that the organization violated the political campaign intervention prohibition and imposed an excise tax on the organization under section 4955; it did not, however, propose revocation of the organization’s exemption under section 501(c)(3).

98 Prior to the enactment of section 4955 in 1987, the IRS was reluctant to impose revocation in cases where the violation was not blatant and the organization had a record of otherwise charitable activities. For example, P.L.R. 8936002 (May 24, 1989) involved a section 501(c)(3) organization that engaged in voter education and issue advocacy relating to the 1984 Presidential election. Describing the case as ‘‘a very close call,’’ the IRS ‘‘reluctantly’’ concluded that the organization’s voter education activities did not constitute prohibited political campaign intervention, despite the use of ‘‘code words’’ that could be viewed as evidencing support for a particular candidate. The IRS appeared unwilling to seek revocation with respect to the organization, probably because of its history of legitimate educational activities. Had section 4955 been in effect when the activity took place, the IRS would have had another enforcement alternative: it could have imposed excise tax penalties on the organization’s expenditures for the activities it found so troublesome.

(b) Section 501(c)(3) Organizations May Not Establish or Support a PAC

Although organizations exempt from tax under some categories of section 501(c) are permitted to establish or support PACs,99 those exempt under section 501(c)(3) are not.
When section 527 (governing the tax treatment of PACs) was added to the Code in 1974, the legislative history provided that ‘‘this provision is not intended to affect in any way the prohibition against certain exempt organizations (e.g., sec. 501(c)(3)) engaging in ‘electioneering’ * * *’’ S. Rep. No. 93–1357 (1974), reprinted in 1975–1 C.B. 517, 534. The regulations under section 527 reflect this congressional intent:

Section 527(f) and this section do not sanction the intervention in any political campaign by an organization described in section 501(c) if such activity is inconsistent with its exempt status under section 501(c). For example, an organization described in section 501(c)(3) is precluded from engaging in any political campaign activities. The fact that section 527 imposes a tax on the exempt function income (as defined in section 1.527–2(c)) expenditures of section 501(c) organizations and permits such organizations to establish separate segregated funds to engage in campaign activities does not sanction the participation in these activities by section 501(c)(3) organizations.

Treas. Reg. §1.527–6(g). Since the enactment of section 527 in 1974, it has been clear that a section 501(c)(3) organization will violate the prohibition on political campaign intervention by providing financial or nonfinancial support for a PAC. IRS CPE Manual at 438–40. While the use of a section 501(c)(3)’s facilities, personnel, or other financial resources for the benefit of a PAC is impermissible, the prohibition does not stop there. In its CPE Manual, the IRS also noted that ‘‘[a]n IRC 501(c)(3) organization’s resources include intangible assets, such as its goodwill, that may not be used to support the political campaign activities of another organization.’’ Id. at 440.
Some leading practitioners have interpreted this provision to prohibit a charity from allowing its name to be used by a PAC, even if the charity provides no financial support or assistance; by allowing a PAC to use its name, the charity implies to its employees and to the public that it endorses the activity of the PAC. See Gregory L. Colvin et al., Commentary on Internal Revenue Service 1993 Exempt Organizations Continuing Professional Education Technical Instruction Program Article on ‘‘Election Year Issues,’’ 11 Exempt Org. Tax Rev. 854, 871 (1995) [hereinafter ‘‘EO Comments’’].

99 For example, section 501(c)(4) and (6) organizations are permitted to establish and/or support PACs. If these exempt organizations provide support for PACs, they are subject to tax, under section 527, on the lesser of their net investment income or their ‘‘exempt function’’ income.

(c) ‘‘Express Advocacy’’ is Not Required, and Issue Advocacy is Prohibited if Used to Convey Support for or Opposition to a Candidate

An organization does not need to violate the ‘‘express advocacy’’ standard applied under federal election law for it to violate the political campaign prohibition of section 501(c)(3).100 T.A.M. 8936002 (May 24, 1989). That is, it is not necessary to advocate the election or defeat of a clearly identified candidate to violate the prohibition. IRS CPE Manual at 412–13. Moreover, an organization may violate the prohibition even if it does not identify a candidate by name. The IRS has stated that ‘‘issue advocacy’’ may serve as ‘‘the opportunity to intervene in a political campaign in a rather surreptitious manner’’ if a label or other coded language is used as a substitute for a reference to identifiable candidates. Id. at 411.
The concern is that an IRC 501(c)(3) organization may support or oppose a particular candidate in a political campaign without specifically naming the candidate by using code words to substitute for the candidate’s name in its messages, such as ‘‘conservative,’’ ‘‘liberal,’’ ‘‘pro-life,’’ ‘‘prochoice,’’ ‘‘anti-choice,’’ ‘‘Republican,’’ ‘‘Democrat,’’ etc., coupled with a discussion of the candidacy or the election. When this occurs, it is quite evident what is happening—an intervention is taking place.

Id. at 411–412. Furthermore:

[a] finding of political campaign intervention from the use of coded words is consistent with the concept of ‘‘candidate’’—the words are not tantamount to advocating support for or opposition to an entire political party, such as ‘‘Republican,’’ or a vague and unidentifiably large group of candidates, such as ‘‘conservative,’’ because the sender of the message does not intend the recipient to interpret them that way. Code words, in this context, are used with the intent of conjuring favorable or unfavorable images—they have pejorative or commendatory connotations.

Id. at 412 n. 6.

(d) Educational Activities May Constitute Participation or Intervention

As discussed above, the IRS considers activities that satisfy the ‘‘methodology test’’ to be ‘‘educational.’’ Just as educational activities may result in impermissible private benefit, however, so too may they violate the prohibition on political campaign intervention. The IRS takes the position that ‘‘[a]ctivities that meet the methodology test * * * may nevertheless constitute participation or intervention in a political campaign.’’ IRS CPE Manual at 415. New York Bar, 858 F.2d 876 (2d Cir. 1988), referred to above, is the leading case on point. In that case, a bar association published ratings of judicial candidates. The ratings were distributed to bar members and law libraries.
The Association also issued press releases regarding its ratings, but did not conduct publicity campaigns to announce them. Id. at 877. The Second Circuit held that although the Association’s publications were educational, the distribution of the publications constituted prohibited campaign intervention. Because the Association disseminated the publications with the hope that they would ‘‘ ‘ensure’ that candidates whom [the Association] consider[ed] to be ‘legally and professionally unqualified’ ’’ would not be elected, the court held that the Association ‘‘indirectly’’ participated in a political campaign on behalf of or in opposition to a candidate for public office. Id. at 881.

100 The FEC’s ‘‘express advocacy’’ standard came into being because the Supreme Court held a provision of the Federal Election Campaign Act relating to contributions ‘‘to reach only funds used for communications that expressly advocate the election or defeat of a clearly identified candidate.’’ See IRS CPE Manual at 412 (quoting Buckley v. Valeo, 424 U.S. 1, 77 (1976)). Examples of ‘‘express advocacy’’ include ‘‘vote for,’’ ‘‘elect,’’ and ‘‘Smith for Congress’’ or ‘‘vote against,’’ ‘‘defeat,’’ and ‘‘reject.’’ Id. at 413 (referring to 11 C.F.R. § 109.1(b)(2)).

An implication of the holding in New York Bar is that one must consider not only whether the activity itself, e.g., publishing educational materials such as candidate ratings, violates the political campaign prohibition, but also whether the intended consequences of the activity violate the prohibition.101 The need to consider the consequences of an otherwise educational activity is clear from a review of several IRS rulings finding that an organization violated the prohibition by disseminating material that was deemed educational, but nonetheless affected voter preferences in violation of the prohibition. For example, in Rev. Rul. 67–71, 1967–1 C.B.
125, the IRS ruled that a 501(c)(3) organization created to improve the public educational system by engaging in campaigns on behalf of candidates for school board was not exempt. Every four years, when the school board was to be elected, the organization considered the qualifications of the candidates and selected those it thought most qualified. The organization then ‘‘engage[d] in a campaign on their behalf by publicly announcing its slate of candidates and by publishing and distributing a complete biography of each.’’ Id. Although the selection process ‘‘may have been completely objective and unbiased and was intended primarily to educate and inform the public about the candidates,’’ the IRS nonetheless ruled it to be intervention or participation in a political campaign. Id.

In Rev. Rul. 76–456, 1976–2 C.B. 151, the IRS ruled that an organization formed for the purpose of elevating the morals and ethics of political campaigning was nevertheless intervening in a political campaign when it solicited candidates to sign a code of fair campaign practices and released the names of those candidates who signed and those candidates who refused to sign. The IRS stated that this was done to educate citizens about the election process and so that they could ‘‘participate more effectively in their selection of government officials.’’ Id. at 152. Nonetheless, such activity, although educational, ‘‘may result * * * in influencing voter opinion’’ and thus constituted a prohibited participation or intervention in a political campaign. Id.

101 See also T.A.M. 9635003 (Apr. 19, 1996). T.A.M. 9635003 involved a section 501(c)(3) organization that conducted ‘‘citizens’ juries,’’ a form of voter education in which a cross-section of citizens is selected to determine which issues are most relevant in the context of a particular campaign, to hear presentations by candidates on those issues, and to rate the candidates’ positions on the issues. The section 501(c)(3) organization disseminated the citizens’ jury’s report, including the candidate ratings. In its dissemination, the organization made it clear that it did not support or oppose any candidate, and that the views expressed were those of the citizen jurors and not the organization. The IRS found that the dissemination of the report constituted impermissible participation in a political campaign, and that all expenditures in connection with the conduct of the citizens’ jury—and not just the expenditures for the dissemination—constituted ‘‘political expenditures’’ under section 4955: This culmination shows that all the activity of the organization leading up to the final report is intimately connected with and a part of the process to put on the [citizens’ jury], and thus publication of the final report makes the entire process with respect to the [citizens’ jury] a proscribed political activity.

(e) Nonpartisan Activities May Constitute Prohibited Political Campaign Participation

The IRS takes the position that the nonpartisan motivation for an organization’s activities is ‘‘irrelevant when determining whether the political campaign prohibition’’ has been violated. IRS CPE Manual at 415. As support for this position, the IRS cites Rev. Rul. 76–456 and New York Bar, both of which are discussed above. In those cases, the court or the IRS found that the activities in question were nonpartisan, but nevertheless held that they constituted participation in a political campaign. As noted by the IRS in its CPE Manual, the court in New York Bar ‘‘made the rather wry observation [that] [a] candidate who receives a ‘not qualified’ rating will derive little comfort from the fact that the rating may have been made in a nonpartisan manner.’’ IRS CPE Manual at 416. Similarly, in G.C.M.
35902 (July 15, 1974), the IRS stated:

The provision in the Code prohibiting participation or intervention in ‘‘any political campaign’’ might conceivably be interpreted to refer only to participation or intervention with a partisan motive; but the provision does not say this. It seems more reasonable to construe it as referring to any statements made in direct relation to a political campaign which affect voter acceptance or rejection of a candidate * * *

(f) The IRS Has Found Violations of the Prohibition on Political Campaign Participation When an Activity Could Affect or Was Intended to Affect Voters’ Preferences

As discussed above, the courts and the IRS have found prohibited political campaign intervention when the activity in question, although educational, affected or could reasonably be expected to affect voter preferences, even where the organization’s motives in undertaking the activity were nonpartisan. G.C.M. 35902 is to similar effect. In that case, the IRS held that a public broadcasting station’s nonpartisan educational motivation was irrelevant in determining whether its provision of free air time to candidates for elective office was permissible under section 501(c)(3). The IRS found that the station’s procedures for providing air time, including an equal time doctrine for all candidates and an on-air disclaimer of support for any particular candidate, were sufficient to ensure that the activity would not constitute an impermissible political campaign intervention. The fact that the station’s motivation was to educate the public and not to influence an election, however, was deemed to be irrelevant.

The cases and rulings cited above make it clear that simply having an educational or nonpartisan motive for engaging in prohibited political activity is not a defense to a finding of violation. The relevance and irrelevance of motive are sometimes misstated, however.
While the absence of an improper political motivation is irrelevant, evidence showing the existence of a political motivation is relevant and is one of the facts and circumstances that the IRS will consider in determining whether there is a violation. Indeed, the IRS has found the existence of evidence showing an intent to participate in a political campaign to be sufficient to support a finding of violation, despite the lack of evidence that the activity achieved the intended results. For example, in G.C.M. 39811 (Feb. 9, 1990), a religious organization encouraged its members to seek election to positions as precinct committee-persons in the Republican or Democratic Party structures. Although none of the organization’s members actually ran for such positions, the IRS found that urging its members to become involved in the local party organizations was part of the organization’s larger plans to ‘‘someday control the political parties.’’

The first step in the Foundation’s long-term strategy was to encourage members to be elected as precinct committeemen. These individuals could then exert influence within the party apparatus, beginning with the county central committee. Precinct committeemen could sway the precinct caucuses, a step in the selection of delegates to the party’s presidential nominating convention. * * * Intervention at this early stage in the elective process in order to influence political parties to nominate such candidates is, we believe, sufficient to constitute intervention in a political campaign.

Id. The IRS went on to say:

In its discussion of the Tax Court opinion [in New York Bar], the [Second Circuit] observed that the ratings of candidates were ‘‘published with the hope that they will have an impact on the voter.’’ The effort, and not the effect, constituted intervention in a political campaign.
Therefore, whether anyone heeded the call to run for precinct committee, whether that individual was elected, and if so, what he or she subsequently did are all immaterial.

Id. In G.C.M. 39811, the IRS did not contend that the organization’s urging of members to run for office alone constituted the violation. Rather, the organization’s ‘‘long-term strategy’’ of seeking to influence the political parties’ nomination of candidates by having its members elected to office, and its urging of members to run for office so as to carry out that strategy, were sufficient to support a finding of impermissible campaign participation, despite the fact that the effort was not successful.

Other cases and rulings have also looked to an organization’s intent as an important element of a finding of prohibited participation or intervention. In 1972, a court held that an organization violated the participation or intervention prohibition when it ‘‘used its publications and broadcasts to attack candidates and incumbents who were considered too liberal.’’ Christian Echoes National Ministry, Inc. v. United States, 470 F.2d 849, 856 (10th Cir. 1972). The court did not discuss whether the activities actually influenced voters or were reasonably likely to do so. Rather, it concluded that the organization’s ‘‘attempts to elect or defeat certain political leaders reflected [the organization’s] objective to change the composition of the federal government.’’ Id.

The IRS also found an organization’s intent relevant in P.L.R. 9117001 (Sept. 5, 1990). As described in that ruling, an organization mailed out material indicating that it was intending to help educate conservatives on the importance of voting in the 1984 general election. According to facts stated in the ruling letter, the material contained language ‘‘intended’’ to induce conservative voters to vote for President Reagan, even though his name was not included in the materials.
The IRS thus concluded that ‘‘the material was targeted to influence a segment of voters to vote for President Reagan.’’ Id.

Based on the above, the IRS position is that an organization can violate the political campaign prohibition by either: (a) conducting activities that could have the effect of influencing voter acceptance or rejection of a candidate or group of candidates (the ‘‘effect’’ standard), or (b) engaging in activities that are intended to influence voter acceptance or rejection of a candidate or group of candidates, whether they do so or not (the ‘‘effort’’ standard).

Most of the uncertainty over the scope of the prohibition on political campaign intervention relates to the ‘‘effect’’ standard—the possibility that an organization may, without intending to do so, engage in an activity that could have the effect of influencing voter acceptance of a candidate and, as a result, place its tax exemption in jeopardy and/or risk incurring excise tax penalties under section 4955. The legislative history of section 4955 makes it clear that an inadvertent action may indeed violate section 501(c)(3), and suggests that the IRS may appropriately apply the excise tax penalty rather than revocation as a sanction in such situations. Nevertheless, some practitioners have expressed the view that, in interpreting whether ambiguous behavior is violative of the campaign intervention prohibition, primary reliance should be placed on whether there was a political purpose to the behavior at issue. See EO Comments at 856–57. In other words, ‘‘to violate the 501(c)(3) prohibition, the organization’s actions have to include an intentional ‘tilt’ for or against one or more people running for public office.’’ Id. at 857.
In this regard, it was noted that:

In most cases, the presence of a political purpose will be clear from the charity’s paper trail, because organizational activities in the political arena are usually accompanied by assertive behavior, much internal discussion, and explicit written communications. * * *

Id. To date, the IRS has shown no intention to abandon its position that an organization may violate the prohibition against political campaign intervention based on the unintended or inadvertent effect of its actions, as well as by engaging in activities with ‘‘an intentional tilt’’ in favor of a candidate or in support of a PAC. Indeed, its recent election year warning to section 501(c)(3) organizations not to ‘‘become involved in any other activities that may be beneficial or detrimental to any candidate’’ (discussed above) evidences an apparent intention to adhere to a broad interpretation of the prohibition. IRS News Release IR–96–23 (Apr. 24, 1996).

(ii) If a Substantial Part of an Organization’s Activities is Attempting to Influence Legislation, or its Primary Goal can only be Accomplished through Legislation, it is an ‘‘Action’’ Organization

Section 501(c)(3) provides that an organization cannot be tax-exempt if a ‘‘substantial part’’ of its activities is ‘‘carrying on propaganda, or otherwise attempting, to influence legislation.’’ Although there is virtually no legislative history on the prohibition, courts have declared that the limitations in section 501(c)(3) ‘‘stem from the policy that the United States Treasury should be neutral in political affairs and that substantial activities directed to attempts to influence legislation should not be subsidized.’’ Haswell v. United States, 500 F.2d 1133, 1140 (Ct. Cl. 1974), cert. denied, 419 U.S. 1107 (1975). (The court also noted that ‘‘[t]ax exemptions are matters of legislative grace and taxpayers have the burden of establishing their entitlement to exemptions.’’ Id.)
The Regulations provide that an organization is an ‘‘action’’ organization if ‘‘a substantial part of its activities is attempting to influence legislation by propaganda or otherwise.’’ Treas. Reg. § 1.501(c)(3)–1(c)(3)(ii). The Regulations also provide that an organization is an ‘‘action’’ organization if it has the following two characteristics: (a) its main or primary objective or objectives (as distinguished from its incidental or secondary objectives) may be attained only by legislation or a defeat of proposed legislation; and (b) it advocates, or campaigns for, the attainment of such main or primary objective or objectives as distinguished from engaging in nonpartisan analysis, study, or research and making the results thereof available to the public. To determine whether a substantial part of an organization’s activities is attempting to influence legislation, two alternative tests exist. Each test contains its own definition of ‘‘legislation’’ and what constitutes an attempt to influence legislation. The two tests also contain different ways of determining substantiality. One test is referred to as the ‘‘substantial-part test.’’ The other test, referred to as the ‘‘expenditure test,’’ 102 was added to tax law in 1976 at sections 501(h) and 4911 as a result of uncertainty over the meaning of the word ‘‘substantial.’’ The ‘‘expenditure test’’ sets forth specific, dollar levels of permissible lobbying expenditures. Section 501(h) did not amend section 501(c)(3), but rather provided charitable organizations an alternative to the vague ‘‘substantial-part’’ limitations of section 501(c)(3). A charitable organization may elect the ‘‘expenditure test’’ as a substitute for the substantial-part test. A public charity that does not elect the expenditure test remains subject to the substantial part test. Treas. Reg. § 1.501(h)–1(a)(4).

102 As stated in the legislative history with respect to I.R.C. § 501(h): ‘‘The language of the lobbying provision was first enacted in 1934. Since that time neither Treasury regulations nor court decisions gave enough detailed meaning to the statutory language to permit most charitable organizations to know approximately where the limits were between what was permitted by the statute and what was forbidden by it. This vagueness was, in large part, a function of the uncertainty in the meaning of the terms ‘substantial part’ and ‘activities’. * * * Many believed that the standards as to the permissible level of activities under prior law was too vague and thereby tended to encourage subjective and selective enforcement.’’ Joint Committee in its General Explanation of the Tax Reform Act of 1976, 1976–3 C.B. (Vol. 2) 419.

The substantial-part test is applied without regard to the provisions of section 501(h). The law, regulations and rulings regarding the expenditure test may not be used to interpret the law, regulations and rulings of the substantial-part test. Section 501(h)(7) (‘‘nothing [in section 501(h)] shall be construed to affect the interpretation of the phrase ‘no substantial part of the activities of which is carrying on propaganda, or otherwise attempting, to influence legislation,’ under [section 501(c)(3)]’’). Determining whether an organization violated the lobbying limitation requires an understanding of what constitutes: i. ‘‘legislation;’’ ii. an attempt to ‘‘influence’’ legislation; and iii. a ‘‘substantial’’ part of an organization’s activities. It is also necessary to understand the circumstances under which an organization’s ‘‘objectives can be achieved only through the passage of legislation.’’

(a) Definition of ‘‘Legislation’’

The Regulations define ‘‘legislation’’ to include ‘‘action by the Congress, by any State legislature, by any local council or similar governing body, or by the public in a referendum, initiative, constitutional amendment, or similar procedure.’’ Treas. Reg. § 501(c)(3)–1(c)(3)(ii). ‘‘Action by the Congress’’ includes the ‘‘introduction, amendment, enactment, defeat, or repeal of Acts, bills, resolutions, or similar items.’’ G.C.M. 39694 (Jan. 22, 1988). This definition does not include Executive Branch actions, or actions of independent agencies. P.L.R.
6205116290A (May 11, 1962). Requesting executive bodies to support or oppose legislation, however, is prohibited. The IRS does not recognize a distinction between ‘‘good’’ legislation and ‘‘bad’’ legislation. For example, in Rev. Rul. 67–293, 1967–2 C.B. 185, the IRS ruled that an organization substantially engaged in promoting legislation to protect animals was not exempt even though the legislation would have benefited the community.

(b) Definition of ‘‘attempting to influence legislation’’

Under the Regulations, an organization will be regarded as ‘‘attempting to influence legislation’’ if it: (a) contacts members of a legislative body for the purpose of proposing, supporting, or opposing legislation (Treas. Reg. § 1.501(c)(3)–1(c)(3)(ii)(a)) (referred to as ‘‘direct lobbying’’); (b) urges the public to contact members of a legislative body for the purpose of proposing, supporting, or opposing legislation (id.) (referred to as ‘‘grassroots lobbying’’); or (c) advocates the adoption or rejection of legislation (Treas. Reg. § 1.501(c)(3)–1(c)(3)(ii)(b)). Section 4945(e) of the Internal Revenue Code provides additional guidance regarding the meaning of ‘‘attempting to influence legislation.’’ 103 According to that provision, a taxable expenditure includes any amount paid or incurred for: (a) any attempt to influence any legislation through an attempt to affect the opinion of the general public or any segment thereof, and (b) any attempt to influence legislation through communication with any member or employee of a legislative body, or with any other government official or employee who may participate in the formulation of the legislation (except technical advice or assistance provided to a government body or to a committee or other subdivision thereof in response to a written request by such body or subdivision. * * *) other than through making available the results of nonpartisan analysis, study, or research. Treas. Reg.
§ 53.4945–2(d)(4), which is applicable to non-electing public charities,104 discusses ‘‘nonpartisan analysis, study, or research’’ as follows: Examinations and discussions of broad social, economic, and similar problems are [not lobbying communications] even if the problems are of the type with which government would be expected to deal ultimately * * * For example, [an organization may discuss].105 Even if specific legislation is not mentioned, however, an indirect campaign to ‘‘mold public opinion’’ may violate the legislative lobbying prohibition. In Christian Echoes National Ministry, Inc. v. United States, 470 F.2d 849 (10th Cir. 1972), the organization in question produced religious radio and television broadcasts, distributed publications, and engaged ‘‘in evangelistic campaigns and meetings for the promotion of the social and spiritual welfare of the community, state and nation.’’ Id. at 852. The court found the publications attempted to influence legislation ‘‘by appeals to the public to react to certain issues.’’ Id. at 855.106

103 I.R.C. §§ 4945(d) and (e) contain definitions of ‘‘attempting to influence legislation’’ with respect to taxable expenditures by private foundations, not public charities. However, ‘‘[a]ctivities which constitute an attempt to influence legislation under Code § 4945 * * * also constitute an attempt to influence legislation under Code § 501(c)(3).’’ G.C.M. 36127 (Jan. 2, 1975). Congress viewed section 4945(e) as a clarification of the phrase ‘‘attempting to influence legislation’’ in tax-exempt law generally, not just with respect to private foundations. Id.

104 See G.C.M. 36127 (Jan. 2, 1975) and Haswell v. United States, 500 F.2d 1133 (Ct. Cl. 1974).

105 See also G.C.M. 36127 (Jan. 2, 1975).
106 For example, the publications urged its readers to: ‘‘write their Congressmen in order to influence the political decisions in Washington;’’ ‘‘work in politics at the precinct level;’’ ‘‘maintain the McCarran-Walter Immigration law;’’ ‘‘reduce the federal payroll by discharging needless jobholders, stop waste of public funds and balance the budget;’’ ‘‘stop federal aid to education, socialized medicine and public housing;’’ ‘‘abolish the federal income tax;’’ and ‘‘withdraw from the United Nations.’’ Christian Echoes National Ministry, 470 F.2d at 855. In light of these facts, the court upheld the IRS position that the organization failed to qualify as a 501(c)(3) organization.

Under the expenditure test, ‘‘grassroots lobbying’’ is ‘‘any attempt to influence legislation through an attempt to affect the opinions of the general public or any segment thereof.’’ Treas. Reg. § 56.4911–2(b)(2)(i). Such a communication will be considered grassroots lobbying if it: (a) refers to specific legislation, (b) reflects a view on such legislation, (c) [e]ncourages the recipient to take action with respect to such legislation. Treas. Reg. § 56.4911–2(b)(2)(ii).107

(c) Definition of ‘‘Substantial’’

A bright-line test for determining when a ‘‘substantial’’ part of an organization’s activities are devoted to influencing legislation does not exist. Neither the regulations nor case law provide useful guidance as to whether the determination must be based on activity or expenditures or both. In Seasongood v. Commissioner, 227 F.2d 907 (6th Cir. 1955), the court held that attempts to influence legislation that constituted less than five percent of total activities were not substantial. The percentage test of Seasongood was, however, explicitly rejected in Christian Echoes National Ministry, Inc. The political [i.e.
legislative] activities of an organization must be balanced in the context of the objects and circumstances of the organization to determine whether a substantial part of its activities was to influence legislation. (citations omitted.) A percentage test to determine whether the activities were substantial obscures the complexity of balancing the organization’s activities in relation to its objects and circumstances. Id. at 855. Yet in Haswell v. United States, 500 F.2d 1133, 1145 (Ct. Cl. 1974), the court determined that while a percentage test is not the only measure of substantiality, it was a strong indication that the organization’s purposes were no longer consistent with charity. In that case, the court concluded that approximately 20 percent of the organization’s total expenditures were attributable to attempts to influence legislation, and they were found to be substantial. Id. at 1146. The IRS has characterized the ambiguity over the meaning of ‘‘substantial’’ as a ‘‘problem [that] does not lend itself to ready numerical boundaries.’’ G.C.M. 36148 (January 28, 1975). In attempting to give some guidance on the subject, however, the IRS said: [t]he percentage of the budget dedicated to a given activity is only one type of evidence of substantiality. Others are the amount of volunteer time devoted to the activity, the amount of publicity the organization assigns to the activity, and the continuous or intermittent nature of the organization’s attention to it.

107 The IRS has also concluded that an organization formed to ‘‘facilitate’’ the inauguration of a state’s governor-elect and the ‘‘orderly transition of power from one political party to another by legislative and personnel studies’’ violated the prohibition on attempting to influence legislation. G.C.M. 35473 (Sept. 10, 1973).
The IRS ‘‘saw no logical way to avoid concluding that [the organization’s] active advocacy of a proposed legislative program requires it to be [classified as an action organization. * * *]’’ See also Rev. Rul. 74–117, 1974–1 C.B. 128.

(d) Circumstances under which an organization’s ‘‘objectives can be achieved only through the passage of legislation’’

The Regulations require that when determining whether an organization’s objectives can be achieved only through the passage of legislation that ‘‘all the surrounding facts and circumstances, including the articles and all activities of the organization, are to be considered.’’ Treas. Reg. §1.501(c)(3)–1(c)(3)(iv). There is little additional IRS or court guidance on the subject. In one of the few comments on this section of the Regulations, the IRS said in G.C.M. 33617 (Sep. 12, 1967) that an organization that was ‘‘an active advocate of a political doctrine’’ was an action organization because its objectives could only be attained by legislation. In its publications, the organization stated that its objectives included: the mobilization of public opinion; resisting every attempt by law or the administration of law which widens the breach in the wall of [redacted by IRS] working for repeal of any existing state law which sanctions the granting of public aid to [redacted by IRS]; and uniting all ‘patriotic’ citizens in a concerted effort to prevent the passage of any federal law [redacted by IRS].
* * *’’ By advocating its position to others, thereby attempting to secure general acceptance of its beliefs; by engaging in general legislative activities to implement its views; by urging the enactment or defeat of proposed legislation which was inimical to its principles: the organization ceased to function exclusively in the educator’s role of informant in that its advocacy was not merely to increase the knowledge of the organization’s audience, but was to secure acceptance of, and action on, the organization’s views concerning legislative proposals, thereby encroaching upon the proscribed legislative area. In Rev. Rul. 62–71, 1962–1 C.B. 85, an organization was formed ‘‘for the purpose of supporting an educational program for the stimulation of interest in the study of the science of economics or political economy, particularly with reference to a specified doctrine or theory.’’ It conducted research, made surveys on economic conditions available, moderated discussion groups and published books and pamphlets. The research activities were principally concerned with determining the effect various real estate taxation methods would have on land values with reference to the ‘‘single tax theory of taxation.’’ ‘‘It [was] the announced policy of the organization to promote its philosophy by educational methods as well as by the encouragement of political action.’’ Id. The tax theory advocated in the publications, although educational within the meaning of section 501(c)(3), could be put into effect only by legislative action. Without further elaboration of the facts involved or how the theory could only be put into effect through legislative action, the IRS ruled the organization was an action organization, and thus not operated exclusively for an exempt purpose. In G.C.M. 37247 (Sept. 
8, 1977), the IRS discussed whether an organization whose guiding doctrine was to propagate a ‘‘nontheistic, ethical doctrine’’ of volunteerism could be considered a 501(c)(3) organization. The ‘‘ultimate goal’’ of the guiding doctrine was ‘‘freedom from governmental and societal control.’’ According to the IRS: [t]his objective can obviously only be attained legally through legislation, including constitutional amendments, or illegally through revolution. If [the organization] should advocate illegal activities, then it is not charitable; if it advocates legal attainment of its doctrine’s goal through legislation, then it is an action organization. The IRS did not conclude that the organization was an action organization, only that there was such a possibility and further investigation was warranted. Research has not uncovered further information about this case.

d. To Satisfy the Operational Test, an Organization Must Not Violate the ‘‘Private Inurement’’ Prohibition

To qualify for tax-exempt status, section 501(c)(3) provides that an organization must be organized and operated so that ‘‘no part of [its] net earnings * * * inures to the benefit of any private shareholder or individual.’’ The Regulations add little clarification to this provision other than saying that ‘‘[a]n organization is not operated exclusively for one or more exempt purposes if its net earnings inure in whole or in part to the benefit of private shareholders or individuals.’’ Treas. Reg. §1.501(c)(3)–1(c)(2). Although the private benefit and private inurement prohibitions share common and often overlapping elements, the two are distinct requirements which must be independently satisfied. American Campaign Academy, 92 T.C. at 1068. The private inurement prohibition may be ‘‘subsumed’’ within the private benefit analysis, but the reverse is not true.
‘‘[W]hen the Court concludes that no prohibited inurement of earnings exists, it cannot stop there but must inquire further and determine whether a prohibited private benefit is conferred.’’ Id. at 1069. It should be noted that the private inurement prohibition pertains to net earnings of an organization, while the private benefit prohibition can apply to benefits other than those that have monetary value. Furthermore, unlike with the private benefit prohibition, the prohibition on private inurement is absolute. ‘‘There is no de minimis exception to the inurement prohibition.’’ G.C.M. 39862 (Nov. 22, 1991). The IRS has described ‘‘private shareholders or individuals’’ as ‘‘persons who, because of their particular relationship with an organization, have an opportunity to control or influence its activities.’’ Id. ‘‘[I]t is generally accepted that persons other than employees or directors may be in a position to exercise the control over an organization to make that person an insider for inurement purposes.’’ Hill, F. and Kirschten, B., Federal and State Taxation of Exempt Organizations 2–85 (1994). ‘‘The inurement prohibition serves to prevent anyone in a position to do so from siphoning off any of a charity’s income or assets for personal use.’’ G.C.M. 39862 (Nov. 22, 1991). Furthermore, the IRS has stated that: [I]nurement is likely to arise where the financial benefit represents a transfer of the organization’s financial resources to an individual solely by virtue of the individual’s relationship with the organization, and without regard to accomplishing exempt purposes. G.C.M. 38459 (July 31, 1980).
See also IRS Exempt Organizations Handbook (IRM 7751) §381.1(4) (‘‘The prohibition of inurement in its simplest terms, means that a private shareholder or individual cannot pocket the organization’s funds except as reasonable payment for goods or services’’); and Hopkins, supra, at 267 (Proscribed private inurement ‘‘involves a transaction or series of transactions, such as unreasonable compensation, unreasonable rental charges, unreasonable borrowing arrangements, or deferred or retained interests in the organization’s assets’’).
Robert Millan wrote:
> 2011/10/16 Jonathan Nieder <jrnieder@gmail.com>:
>>     unsigned char sun_len;
>>     unsigned char sun_family;
>>     char sun_path[108];		/* Path name.  */
>
> Is this 108 the actual length?  ISTR it was just a placeholder.

It's the actual size of the sun_path field of struct sockaddr_un, if
that's what you mean.

> <sys/un.h> has a macro to determine actual length:
>
> /* Evaluate to actual length of the `sockaddr_un' structure.  */
> # define SUN_LEN(ptr) ((size_t) (((struct sockaddr_un *) 0)->sun_path) \
>                       + strlen ((ptr)->sun_path))

Ah, thanks for this clarification.  glibc's reference manual says:

    You should compute the "length" parameter for a socket address in
    the local namespace as the sum of the size of the "sun_family"
    component and the string length (/not/ the allocation size!) of
    the file name string.  This can be done using the macro "SUN_LEN":

In other words, you can call bind() with SUN_LEN(ptr) as the addrlen
argument instead of sizeof(struct sockaddr_un) to tell the kernel not
to copy so much, and to avoid depending on the magic number chosen in
the definition of sockaddr_un.

However, the POSIX documentation for bind() gives an example of an
AF_UNIX socket with

    if (bind(sfd, (struct sockaddr *) &my_addr,
             sizeof(struct sockaddr_un)) == -1)

Similarly, the example in the bind(2) manpage from the man-pages
project uses sizeof(struct sockaddr_un) as the addrlen argument.

Regarding the magic number, POSIX provides some more insight:
[...]

>> I wonder whether there would be any downside to changing that 104 in
>> the kernel to 108.
>
> Breaking kernel ABI for everyone not using a patched kernel.

Not a good thing.  If it's desirable, the change could be made
upstream.

>> Separately from that, it would be helpful to know where the buffer
>> overflowed in #645377 is, since maybe it could be made bigger
>> without changing the layout of struct sockaddr_un.
>
> I don't think this matters, it's an instance of sockaddr_un.
If sockaddr_un is part of the ABI as an argument to some function, it
would matter.

> If you make sun_path[] bigger in the kernel, then instead of 160 you
> can overflow it with a larger value
[...]
> The kernel-side of things is now doing the right thing AFAICS.

It's breaking userspace apps that followed documentation to the letter
and worked before.  How about this (untested)?

diff --git i/sys/kern/uipc_syscalls.c w/sys/kern/uipc_syscalls.c
index 3b83e1c..7b4a11e 100644
--- i/sys/kern/uipc_syscalls.c
+++ w/sys/kern/uipc_syscalls.c
@@ -1703,11 +1703,18 @@ getsockaddr(namp, uaddr, len)
 	if (error) {
 		free(sa, M_SONAME);
 	} else {
+		const char *p;
+		size_t datalen;
+
 #if defined(COMPAT_OLDSOCK) && BYTE_ORDER != BIG_ENDIAN
 		if (sa->sa_family == 0 && sa->sa_len < AF_MAX)
 			sa->sa_family = sa->sa_len;
 #endif
 		sa->sa_len = len;
+		datalen = len - offsetof(struct sockaddr, sa_data[0]);
+		p = memchr(sa->sa_data, '\0', datalen);
+		if (p)
+			sa->sa_len = p - (const char *)sa;
 		*namp = sa;
 	}
 	return (error);
# Restful

Restful helps to keep Controllers DRY, removing repetitive code from basic REST actions. Very basic controllers repeat the same code for REST actions over and over, and if you do TDD - and you should be doing it - the code to test these controllers is just as repetitive and boring. Restful helps you get rid of that repetitive and boring code with very simple defaults for RESTful controllers.

But wait, this is not a new idea; there is already the Inherited Resources gem (github.com/josevalim/inherited_resources) that allows you to do the same and maybe even more, so why write another library? It's simple: I had the time and I wanted to try it. This library does not cover all the cases that Inherited Resources covers, but it's good enough for many controllers. Also, all the source code is documented and quite simple to follow in case you want to learn how it is implemented.

## Installation

Restful requires Ruby on Rails 4.0 and Ruby 2.0. To install it, just add the Restful gem to your Gemfile:

```ruby
gem 'restful_controller', require: 'restful'
```

Run the bundler command and you are all set.

## Usage

The Restful module must be included in each controller that you want to become Restful. These controllers also need the respond_to macro, which specifies the formats our controllers will respond to. Finally, the restful macro is needed to set up our controller. This macro accepts the following params:

### Params

- model: A required parameter, a symbol of the model name.
- route_prefix: A prefix string to be used with the controller's url helpers.
- actions: An array of actions that a controller should implement; if none is passed then all seven REST actions are defined.

### Examples

Simple:

```ruby
class DocumentsController < ApplicationController
  include Restful::Base
  respond_to :html

  restful model: :document
end
```

This definition will create the seven REST actions for the Document model; it sets up a single object instance variable @document and a collection variable @documents.
Route prefix:

```ruby
class DocumentsController < ApplicationController
  include Restful::Base
  respond_to :html

  restful model: :document, route_prefix: 'admin'
end
```

With the route_prefix param every URL helper method in our controller will have the defined prefix. The `edit_resource_path` helper method will call `admin_edit_resource_path` internally.

Listed actions variation: the last parameter, actions, allows you to list in an array the actions that you want your controller to have:

```ruby
class DocumentsController < ApplicationController
  include Restful::Base
  respond_to :html

  restful model: :document, actions: [:index, :show]
end
```

In this case our controller will only respond to those 2 actions. We can do it the other way around and indicate the list of actions that shouldn't be defined:

```ruby
class DocumentsController < ApplicationController
  include Restful::Base
  respond_to :html

  restful model: :document, actions: [:all, except: [:destroy, :show]]
end
```

For this last example all seven REST actions will be defined except :destroy and :show.

## Actions

By default each action is created with an alias method with a bang(!). If you need to override an action, just redefine it in your controller; to run the base action, just call the bang version:

```ruby
def index
  @documents = Document.all
  index!
end
```

This will allow you to let Restful continue with the action flow. When overriding an action, just be sure to set instance variables named after the defined model inside the action; doing this will allow you to call the bang action version:

```ruby
def new
  @document = Document.new name: 'Default name'
  new!
end
```

For actions like :create and :update a notice or alert can be passed as an option to be set in the flash object:

```ruby
def create
  @document = Document.new secure_params
  create!(notice: 'Hey a new document was created!')
end
```

Also a block can be passed for the happy path to tell the application where to redirect:

```ruby
def update
  @document = Document.find params[:id]
  @document.update_attributes secure_params
  update!(notice: 'Document has been updated'){ root_path }
end
```

It's also possible to supply a block for the non-happy path; this means providing a dual block for success and failure results from our action:

```ruby
def update
  @document = Document.find params[:id]
  @document.update_attributes secure_params
  update! do |success, failure|
    success.html { redirect_to root_path }
    failure.html { render :custom }
  end
end
```

## Strong params

Restful provides 3 hooks for you to implement in your controller; two of these hooks will be called depending on the action that is being executed: :create_secure_params and :update_secure_params. If you don't require a specific strong params definition for each action, then just implement the :secure_params method and this will be called instead.

## Rails caching

Index and show actions added using Restful make use of Rails' #stale? method, which sets the etag and last_modified response headers that help the client and the Rails application reuse content if it hasn't changed. For content that hasn't changed, Rails returns a 304 Not Modified HTTP status; on the server, Rails avoids rendering this content and does not send data back to the client. The client should reuse its cached version of it. If content has changed then it is rendered and sent back to the client along with the updated headers.

api.rubyonrails.org/classes/ActionController/ConditionalGet.html#method-i-stale-3F

## Helper methods

Restful adds a set of handy methods that allows it to play nice with Rails view inheritance.
These methods are:

```ruby
# Let's assume that our model is Document
collection_(path|url)               # will return /documents
edit_resource_(path|url)(object)    # will return /documents/edit/1
new_resource_(path|url)             # will return /documents/new
resource_class                      # will return the class definition Document
```

If you want to know how to take advantage of Rails View Inheritance then take a look at this document: github.com/mariochavez/restful/wiki/Rails-Template-Inheritance

## Sample project

A sample project can be found at github.com/mariochavez/restful_sample; it shows all about the topics discussed in this Readme file.

## Bugs and feedback

Use issues at Github to report bugs or give feedback. For detailed documentation look at rdoc.info/github/mariochavez/restful/master/frames
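As a closing illustration of the strong-params hooks described earlier, here is a plain-Ruby sketch (no Rails; the dispatch logic and the hash stand-ins for params.require/permit are assumptions for illustration, not Restful's actual internals) of how an action-specific hook could take precedence over :secure_params:

```ruby
# Sketch of the hook lookup: prefer <action>_secure_params when the
# controller defines it, otherwise fall back to the generic hook.
module ParamsHookDispatch
  def params_for(action)
    specific = :"#{action}_secure_params"     # e.g. :create_secure_params
    respond_to?(specific, true) ? send(specific) : send(:secure_params)
  end
end

class DocumentsController
  include ParamsHookDispatch

  private

  # Generic hook, used when no action-specific hook exists.
  def secure_params
    { title: "t", body: "b" }
  end

  # Action-specific hook for :create; takes precedence.
  def create_secure_params
    { title: "t", body: "b", author_id: 1 }
  end
end
```

With this shape, :create picks up create_secure_params while :update falls back to secure_params.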
This is a very short guide on how to get started with Eigen. It has a dual purpose. It serves as a minimal introduction to the Eigen library for people who want to start coding as soon as possible. You can also read this page as the first part of the Tutorial, which explains the library in more detail; in this case you will continue with The Matrix class. In order to use Eigen, you just need to download and extract Eigen's source code (see the wiki for download instructions). In fact, the header files in the Eigen subdirectory are the only files required to compile programs using Eigen. The header files are the same for all platforms. It is not necessary to use CMake or install anything. Here is a rather simple program to get you started. We will explain the program after telling you how to compile it. There is no library to link to. The only thing that you need to keep in mind when compiling the above program is that the compiler must be able to find the Eigen header files. The directory in which you placed Eigen's source code must be in the include path. With GCC you use the -I option to achieve this, so you can compile the program with a command like this: On Linux or Mac OS X, another option is to symlink or copy the Eigen folder into /usr/local/include/. This way, you can compile the program with: When you run the program, it produces the following output: The Eigen header files define many types, but for simple applications it may be enough to use only the MatrixXd type. This represents a matrix of arbitrary size (hence the X in MatrixXd), in which every entry is a double (hence the d in MatrixXd). See the quick reference guide for an overview of the different types you can use to represent a matrix. The Eigen/Dense header file defines all member functions for the MatrixXd type and related types (see also the table of header files). All classes and functions defined in this header file (and other Eigen header files) are in the Eigen namespace. 
The first line of the main function declares a variable of type MatrixXd and specifies that it is a matrix with 2 rows and 2 columns (the entries are not initialized). The statement m(0,0) = 3 sets the entry in the top-left corner to 3. You need to use round parentheses to refer to entries in the matrix. As usual in computer science, the index of the first index is 0, as opposed to the convention in mathematics that the first index is 1. The following three statements set the other three entries. The final line outputs the matrix m to the standard output stream. Here is another example, which combines matrices with vectors. Concentrate on the left-hand program for now; we will talk about the right-hand program later. The output is as follows: The second example starts by declaring a 3-by-3 matrix m which is initialized using the Random() method with random values between -1 and 1. The next line applies a linear mapping such that the values are between 10 and 110. The function call MatrixXd::Constant(3,3,1.2) returns a 3-by-3 matrix expression having all coefficients equal to 1.2. The rest is standard arithmetic. The next line of the main function introduces a new type: VectorXd. This represents a (column) vector of arbitrary size. Here, the vector v is created to contain 3 coefficients which are left uninitialized. The second-to-last line uses the so-called comma-initializer, explained in Advanced initialization, to set all coefficients of the vector v to be as follows: The final line of the program multiplies the matrix m with the vector v and outputs the result. Now look back at the second example program. We presented two versions of it. In the version in the left column, the matrix is of type MatrixXd which represents matrices of arbitrary size. The version in the right column is similar, except that the matrix is of type Matrix3d, which represents matrices of a fixed size (here 3-by-3).
Because the type already encodes the size of the matrix, it is not necessary to specify the size in the constructor; compare MatrixXd m(3,3) with Matrix3d m. Similarly, we have VectorXd on the left (arbitrary size) versus Vector3d on the right (fixed size). Note that here the coefficients of vector v are directly set in the constructor, though the same syntax of the left example could be used too. The use of fixed-size matrices and vectors has two advantages. The compiler emits better (faster) code because it knows the size of the matrices and vectors. Specifying the size in the type also allows for more rigorous checking at compile-time. For instance, the compiler will complain if you try to multiply a Matrix4d (a 4-by-4 matrix) with a Vector3d (a vector of size 3). However, the use of many types increases compilation time and the size of the executable. The size of the matrix may also not be known at compile-time. A rule of thumb is to use fixed-size matrices for size 4-by-4 and smaller. It's worth taking the time to read the long tutorial. However if you think you don't need it, you can directly use the classes documentation and our Quick reference guide.
http://eigen.tuxfamily.org/dox/GettingStarted.html
This page contains monthly reports to the ASF Board of Directors from incubator projects that have to report this month. You can also see the overall quarter report for 1st Quarter 2006.

ActiveMQ
Nathan Mittler was voted in as a committer due to his excellent work on the C++ client for ActiveMQ. The Java code base has been going through QA and stabilization for the past few months, and several release candidates for 4.0 have been cut. The 4.0 final release is currently under vote and we expect to have a final release very shortly. Development is starting to gear up now for the 4.1 release, and we are excited to find out where the community and committers drive the development of the next 4.1 release. The website home page has now been sorted out and is being checked into svn. The static HTML is being generated from a Confluence wiki and content is very easy to update now. See:

ADF Faces
Mailing lists, subversion repository, and JIRA setup was completed, along with accounts for two new committers to the podling. Initial code drop committed to repository; starting to work through package name adjustments and code review.

Cayenne
Initial committer accounts were created and the Jira migration was completed. Migration of SourceForge CVS to Apache SVN was completed. New modules developed since the Incubator proposal was submitted (namely the JPA module) are switched to the new package naming: org.apache.cayenne. The core modules still use org.objectstyle.cayenne naming.

Kabuki
Working on project documentation and logistics for using the toolkit hosted at Apache.

Lokahi
Not much going on this month, after a fairly active month in April. No issues that need the Incubator PMC or Board attention at this time. Hoping to get more done in the next month or two...

Lucene.Net
The main issue at the moment continues to be the gathering of iCLAs.
We're making progress:

Total number of contributors contacted: 104
Number of contributors with iCLA already filed at ASF: 41
Number of contributors that have already sent their iCLA, but still not filed at ASF: 18
Number of contributors who have not yet sent their iCLA: 45

We will begin discussing alternative strategies, such as clean-room re-implementations, for code whose contributors have not sent in their CLAs yet.

OFBiz
We've made some significant progress in replacing LGPL jars, mainly JOTM, the default transaction manager of OFBiz. We've integrated the Geronimo/Jencks transaction manager (TM) into OFBiz, and early testing shows it's working fine. We continue testing and gathering feedback, and anticipate that we will be able to remove the JOTM jars soon. At the same time, we will be able to remove some other, less critical, LGPLed jars. Once that's done, we plan to move the OFBiz codebase to the Incubator SVN server.

A new committer has been added to the project (Jacques Le Roux): he is a longtime contributor and is doing some interesting implementation work for the POS (Point of Sale) component, so the core group of initial committers has (successfully) voted to grant him commit rights to the POS component.

ServiceMix
The STATUS file for the project is up to date. The code is clean and uses only Apache-compliant libraries; it has the correct copyright notices and is in the org.apache.servicemix namespace. We've got all the software grants sorted, and all developers have their CLAs on file and accounts created. The project has very active mailing lists. The code base is going through a stabilization phase. The community is actively working on cutting ServiceMix 3.0 M1 release candidates. Hopefully one of those release candidates will make it to a vote and pass soon. The website home page has now been sorted out and is being checked into svn. The static HTML is being generated from a Confluence wiki and content is very easy to update now.
See:

stdcxx
Stdcxx status report for the calendar quarter ending in 4/2006: This is the third quarterly report for stdcxx. Since the last report the stdcxx community has added a new committer, Anton Pevtsov, and with his help successfully completed the release of version 4.1.3 of the project. Since the release the team has made significant progress migrating the Rogue Wave C++ Standard Library test harness and test suite to the Apache stdcxx test driver. The stdcxx community also continues to increase the visibility of the project with the goal of further raising the number as well as the diversity of its users, contributors, and committers. The plan for the next three months is to continue to work on migrating the test suite and, in parallel, to enhance the support for the C++ Standard Library extensions described in the Technical Report on C++ Library Extensions and recently voted into the working paper of the (next) C++ Standard expected to be released by 2009. Martin Sebor

Synapse
After the M1 release in December, Synapse had a slow start to the year. However, recently the project has restarted in full vigor with some key design improvements to make things cleaner and easier to develop. The focus on the XML config model has been reduced, with the Synapse system being run on an object model config which can be filled using the XML config language or directly (or from a database or whatever). We're now working towards an M2 release ASAP to get wider feedback on the core design and architecture, and if that goes well, another release by ApacheCon to get closer to the desired function for the system. Community development is also making slow but steady progress. The diversity has improved and we've recently added a new committer as well. If all goes well and if the community development continues, Synapse should be ready for graduation soon - probably before ApacheCon.

TSIK

Tuscany
On the Java side, community and code development are continuing.
We have added two new committers (Ant Elder and Daniel Kulp) since the last report and have many other people active on the mailing lists. We have been working toward a Java release which is intended to allow others to easily contribute plugins and other functionality in order to further expand the community. A vote on a candidate started on the development list (as of 5/18/06), and if passed we will be asking the Incubator PMC for permission to release it. After a slower start, development is now also ramping up on the C++ codebase, but additional effort will be needed to attract new community members.

WebWork 2
We resolved the remaining IP issues, which included LGPL Javascript dependencies, invalid copyrights (MyCorp, Inc), and code grants for valid copyrights. The Struts PMC then successfully voted to accept the WebWork 2 podling, and the Incubator PMC vote passed as well. The infrastructure and website-related graduation tasks have been completed and we are now fully graduated.

Project just started incubation. CLAs are on file for the initial committers. A CCLA has been received for the initial contributions of the Java ORB implementation and the code has been imported into the SVN repository. Mailing lists, subversion repository, and JIRA setup were completed. Activity is starting on the development mailing list. Currently the dev team has started to discuss what would be a good first milestone release and its definition.
http://wiki.apache.org/incubator/May2006
From any node in the cluster, update the global device namespace. You might have a volume management daemon such as vold running on your node, and have a CD-ROM drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is in the drive. This error is expected behavior, and you can safely ignore the error message. To manage this volume with volume management software, use the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to update the list of devices on all nodes that are attached to the new volume that you created. For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
http://docs.oracle.com/cd/E19199-01/819-6676/6n8m3tm9c/index.html
Text Fields

Clone the code or follow along in the online editor. We are about to create a simple app that reverses the contents of a text field. Again this is a pretty short program, so I have included the whole thing here. Skim through to get an idea of how everything fits together. Right after that we will get into the details!

import Browser
import Html exposing (Html, Attribute, div, input, text)
import Html.Attributes exposing (..)
import Html.Events exposing (onInput)


-- MAIN

main =
  Browser.sandbox { init = init, update = update, view = view }


-- MODEL

type alias Model =
  { content : String }

init : Model
init =
  { content = "" }


-- UPDATE

type Msg
  = Change String

update : Msg -> Model -> Model
update msg model =
  case msg of
    Change newContent ->
      { model | content = newContent }


-- VIEW

view : Model -> Html Msg
view model =
  div []
    [ input [ placeholder "Text to reverse", value model.content, onInput Change ] []
    , div [] [ text (String.reverse model.content) ]
    ]

This code is a slight variant of the counter from the previous section. You set up a model. You define some messages. You say how to update. You make your view. The difference is just in how we filled this skeleton in. Let's walk through that!

As always, you start by guessing at what your Model should be. In our case, we know we are going to have to keep track of whatever the user has typed into the text field. We need that information so we know how to render the reversed text.

type alias Model =
  { content : String }

This time I chose to represent the model as a record. (You can read more about records here and here.) For now, the record stores the user input in the content field.

Note: You may be wondering, why bother having a record if it only holds one entry? Couldn't you just use the string directly? Sure! But starting with a record makes it easy to add more fields as our app gets more complicated. When the time comes where we want two text inputs, we will have to do much less fiddling around.
Okay, so we have our model. Now in this app there is only one kind of message really. The user can change the contents of the text field.

type Msg
  = Change String

This means our update function just has to handle this one case:

update : Msg -> Model -> Model
update msg model =
  case msg of
    Change newContent ->
      { model | content = newContent }

When we receive new content, we use the record update syntax to update the contents of content.

Finally we need to say how to view our application:

view : Model -> Html Msg
view model =
  div []
    [ input [ placeholder "Text to reverse", value model.content, onInput Change ] []
    , div [] [ text (String.reverse model.content) ]
    ]

We create a <div> with two children. The interesting child is the <input> node. In addition to the placeholder and value attributes, it uses onInput to declare what messages should be sent when the user types into this input.

This onInput function is kind of interesting. It takes one argument, in this case the Change function which was created when we declared the Msg type:

Change : String -> Msg

This function is used to tag whatever is currently in the text field. So let's say the text field currently holds glad and the user types e. This triggers an input event, so we will get the message Change "glade" in our update function.

So now we have a simple text field that can reverse user input. Neat! Now on to putting a bunch of text fields together into a more traditional form.
https://guide.elm-lang.org/architecture/text_fields.html
More often than not, your PlotDevice scripts will involve at least a bit of mathematics. Things like fluid motion, orbital behavior, and easing in and out all require a little number-crunching. But fear not, this chapter collects some of the more useful math tools for dealing with 2D drawing.

When you draw primitives to the PlotDevice canvas, their position is specified relative to the 'origin' in the upper left corner. By changing the 'transformation state', you can modify the location of the origin point, which will affect how all subsequent pairs of coordinates are translated into canvas positions.

Perhaps it's easiest to think of transformation state in terms of a real-world pen & paper analogy. When you move the pen to a pair of coordinates, your arm moves a relative distance from its starting point. When you change transformation state, you're picking up the pen and shifting the paper before making the same arm movements as before.

To get a sense of how transformations interact with drawing commands, let's look at the translate() command. It allows you to shift the canvas horizontally and vertically (but without changing its orientation or size). For instance, when you say:

rect(20, 40, 80, 80)

a rectangle with a width and a height of 80 will be positioned 20 to the right and 40 down from the top left corner of the canvas. We'll refer to its coordinates as (20,40) for short. But when you say:

translate(100, 100)
rect(20, 40, 80, 80)

the origin point is no longer at the top left (0,0) since we translated it to (100,100). As a result, the rect() created in the second line will be drawn 20 to the right and 40 down from the new origin point. Thus the rectangle's final resting place will be (120,140).
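The bookkeeping behind translate() can be modeled in a few lines of plain Python (this is an illustration only, not PlotDevice code; the Canvas class here is invented for the sketch):

```python
class Canvas:
    """Minimal model of how translate() shifts the origin."""

    def __init__(self):
        self.origin = (0, 0)

    def translate(self, dx, dy):
        # Shifts are relative to the *current* origin, so they accumulate.
        ox, oy = self.origin
        self.origin = (ox + dx, oy + dy)

    def place(self, x, y):
        """Canvas position where a shape drawn at (x, y) actually lands."""
        ox, oy = self.origin
        return (ox + x, oy + y)


c = Canvas()
c.translate(100, 100)
print(c.place(20, 40))  # (120, 140), matching the rect() example above
```

Calling translate(100, 100) a second time moves the same drawing call to (220, 240), which is exactly the cumulative behavior described next.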
Note how calling it repeatedly causes the rectangle – drawn at (20,40) in each instance – to be positioned further and further to the lower right:

The rotate(), scale(), and skew() commands also work incrementally based on the 'current' transformation state. If you first rotate by 40°, all the elements you subsequently draw will be rotated by 40°. If you then rotate by 30°, the current rotation becomes the sum of the two rotations: 40°+30° = 70°. Likewise, if you keep calling scale(0.8) in your script the elements on the canvas become smaller and smaller. The second time you call it, the current scale becomes 0.64 (0.8 × 0.8), the third time it becomes 0.512 (0.64 × 0.8), and so on. The reset() command undoes any transformation changes from earlier in the script and repositions the origin at the canvas's top left corner.

We haven't discussed the transform() command yet. As you may have already read in the reference, you can switch it between two 'modes': CENTER and CORNER. Centered transformation means that all shapes, paths, text and images will rotate, scale, and skew around their own center (as we would probably expect them to). Using corner-mode transformations means that elements transform around the current origin point. This can be a difficult concept to grasp. In the example below, we move the origin point to (100,100) and have three pieces of text rotate around it. Without the corner mode transform, they would rotate around their own center and it would be a lot more difficult to position them. Since the corner mode rotation is relative to the origin, we can draw each piece of text at the same relative coordinates (15, 0):

In addition to allowing you to change the mode, the transform() command has special 'clean up' behavior when used as part of a with block. It allows you to safely create state-in-a-state. Inside the indented block, any translate(), rotate(), scale() and skew() commands are valid only until the end of the block.
Then the transformation state reverts to how things were prior to the with-statement. This way you can transform groups of elements that need to stay together. Here's a short example. Notice that the last rectangle isn't rotated? That's because the rotate() call happens within the transform() block and is reset after drawing the second rectangle.

rect(20, 20, 40, 40)
with transform():
    rotate(45)
    rect(120, 20, 40, 40)
rect(220, 20, 40, 40)

You can think of these nested states as analogous to the orbits of planets and moons in our solar system. The planets orbit a central origin point, the Sun, while the moons orbit around their respective planets. You can think of a transform() block as a shift in perspective – you're setting a new 'local' origin point centered not on the Sun but on a particular planet. The planet's moons orbit this new, local origin (wherever this is), blissfully unaware of the massive star at the center of it all.

size(450, 450)
speed(30)

def draw():
    stroke(0)
    transform(CORNER)

    # This is the starting origin point (the heart of the Sun)
    translate(225, 225)
    arc(0,0, 5)
    text("sun", 10,0)

    for i in range(3):
        # Each planet acts as a local origin for the orbiting moon.
        # Comment out the transform() statement and see what happens.
        with transform():
            # This is a line with a length of 120,
            # that starts at the sun and has an angle of i * 120.
            rotate(FRAME+i*120)
            line(0,0, 120,0)

            # Move the origin to the end of the line.
            translate(120, 0)
            arc(0,0, 5)
            text("planet", 10, 0)

            # Keep rotating around the local planet.
            rotate(FRAME*6)
            line(0, 0, 30, 0)
            text("moon", 32, 0)

        # The origin moves back to the sun at the end of the block.

PlotDevice scripts are rife with x/y coordinate pairs. You use coordinates to place primitives on the canvas, add line segments to a Bezier, and so forth. Most of the time you can choose these locations through intuition or trial and error.
But occasionally you’ll want to perform calculations on points to assemble your compositions in a relative manner. Sometimes you’ll be working with a pair of points and want to know the angle or distance between them. In other cases you may have a single point and need to calculate a second position relative to the first. To aid in these kinds of tasks, PlotDevice provides a simple class called Point. You can create a Point by passing a pair of coordinate values to the constructor function and access its coordinates through the x and y attributes. pt = Point(13,37) print pt.y >>> 37 You can unpack it back into a pair of values through simple assignment: x, y = pt print x, y >>> 13 37 Point objects support basic arithmetic operators (as does the similar Size object): pt = Point(10, 20) print pt * 2 # Point(20, 40) print pt / 2 # Point(5, 10) print pt + 5 # Point(15, 25) print pt + (40, 30) # Point(50, 50) pt = Point(10, 20) sz = Size(200,200) print pt + sz # Point(210, 220) print pt + sz/2 # Point(110, 120) Once you have a Point, you can ‘ask’ it all sorts of questions about how it relates to other points with a few useful (and speed-optimized) math methods. You can pull off some neat graphical tricks by combining the Point methods in your scripts. Finding the angle from a central origin to a randomly positioned point: r = 2.0 origin = Point(WIDTH/2, HEIGHT/2) for i in range(5): pt = Point(random(WIDTH), random(HEIGHT)) arc(pt, r) a = origin.angle(pt) with transform(CORNER): translate(origin.x, origin.y) rotate(-a) arrow(30, 0, 10) Orbiting around an origin point: r = 2.0 origin = Point(WIDTH/2, HEIGHT/2) arc(origin, r) # a.k.a. 
# arc(origin.x, origin.y, r)
for i in range(10):
    a = 36*i
    pt = origin.coordinates(85, a)
    arc(pt, r)
    line(origin, pt)

Drawing perpendicular lines around a circular path:

stroke(0.5) and nofill()
path = oval(45, 45, 105, 105)
for t in range(50):
    curve = path.point(t/50.0)
    a = curve.angle(curve.ctrl2)
    with transform(CORNER):
        translate(curve.x, curve.y)
        rotate(-a+90)   # rotate by 90°
        line(0, 0, 35, 0)

As you may remember from trigonometry, there are two commonly used systems for expressing the size of angles. In day-to-day life we're most likely to encounter 'degrees' ranging from 0 to 360. In mathematics it's more traditional to use 'radians' ranging from 0 to 2π. PlotDevice recognizes that different situations may call for different angle units and provides some utilities to make switching between them easy.

A number of commands take an angle as one of the arguments. By default, you're expected to use degrees. But by calling the geometry() command with DEGREES, RADIANS, or PERCENT you can switch between modes on the fly. Any subsequent call to a drawing or transformation command that deals with angles will then deal in the newly-specified units. The angle() and coordinates() methods discussed above also obey the geometry() setting – both for interpreting arguments and providing return values.

with stroke(0), nofill():
    arc(50,25, 25, range=180)
    geometry(RADIANS)
    arc(50,75, 25, range=pi)
    geometry(PERCENT)
    arc(50,125, 25, range=.5)

To make working with radians more convenient, PlotDevice provides a pair of global constants – pi and tau – representing a half- and full-circle respectively.

Sometimes you want to set the position or size of objects in such a way that they interrelate to each other, creating a kind of 'harmony' between them. For example, sine waves are great for coordinating motion since they ease in and out. Another interesting proportional principle is the so-called golden ratio or '3-5-8 rule'.
It has been around in aesthetics for millennia, though its longevity seems to have as much to do with groupthink as anything 'fundamental' about the proportion. For our purposes, the great thing about it is that it can be expressed as a simple mathematical rule:

w1, w2 = goldenratio(260)
h1, h2 = goldenratio(260)
b1, b2 = goldenratio(1.0)
b3, b4 = goldenratio(b1)
fill(0, b1/2, b1)
rect(0, 0, w1, h1)
fill(0, b2/2, b2)
rect(w1, 0, w2, h1)
fill(0, b4/2, b4)
rect(0, h1, w1+w2, h2)

x, y = 0, 0
w, h = 260, 260
th = h   # top height
bh = 0   # bottom height
for i in range(10):
    th, bh = goldenratio(th)
    v = float(th)/w + 0.3
    fill(0, v/2, v)
    rect(x, y, w, th)
    y += th
    th = bh

Given a starting point, a distance, and an angle, cosine gives the horizontal offset to the end point and sine gives the vertical offset. The command in PlotDevice would look like this:

def coordinates(x0, y0, distance, angle):
    from math import radians, sin, cos
    angle = radians(angle)
    x1 = x0 + cos(angle) * distance
    y1 = y0 + sin(angle) * distance
    return x1, y1
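The goldenratio() command used in the proportion examples above is a PlotDevice built-in; its exact return convention is assumed here, but the underlying math is just a division by φ ≈ 1.618. A plain-Python stand-in might look like this:

```python
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618


def goldenratio(n):
    """Split n into two segments in golden proportion.

    Returning (larger, smaller) is an assumption for this sketch;
    PlotDevice's built-in may order or round its results differently.
    """
    larger = n / PHI
    return larger, n - larger


w1, w2 = goldenratio(260)
# w1 + w2 adds back up to 260, and w1/w2 equals PHI
```

Splitting the larger segment again (as the b3, b4 = goldenratio(b1) line does) keeps subdividing in the same proportion, which is what gives the stacked-rectangle example its harmonious look.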
https://plotdevice.io/tut/Geometry
Provided by: liballegro-doc_4.2.2-3_all

NAME
       set_clip_rect - Sets the clipping rectangle of a bitmap. Allegro game
       programming library.

SYNOPSIS
       #include <allegro.h>

       void set_clip_rect(BITMAP *bitmap, int x1, int y1, int x2, int y2);

DESCRIPTION
       Each bitmap has an associated clipping rectangle, which is the area of
       the image that it is OK to draw onto. Nothing will be drawn to
       positions outside this space. This function sets the clipping
       rectangle for the specified bitmap. Pass the coordinates of the
       top-left and bottom-right corners of the clipping rectangle in this
       order; these are both inclusive, i.e. set_clip_rect(bitmap, 16, 16,
       32, 32) will allow drawing to (16, 16) and (32, 32), but not to
       (15, 15) and (33, 33).

SEE ALSO
       add_clip_rect(3alleg), set_clip_state(3alleg), get_clip_state(3alleg),
       ex12bit(3alleg), excamera(3alleg)
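The inclusive-corner rule is the part people most often get wrong, so here it is modeled in plain C (an illustration of the semantics described above, not actual Allegro code):

```c
#include <stdbool.h>

/* Returns true when (x, y) falls inside a clipping rectangle whose
 * top-left (x1, y1) and bottom-right (x2, y2) corners are BOTH
 * inclusive, as set_clip_rect() defines them. */
bool in_clip(int x, int y, int x1, int y1, int x2, int y2)
{
    return x >= x1 && x <= x2 && y >= y1 && y <= y2;
}
```

With the rectangle from the example, in_clip(16, 16, 16, 16, 32, 32) and in_clip(32, 32, 16, 16, 32, 32) both hold, while (15, 15) and (33, 33) fall outside.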
http://manpages.ubuntu.com/manpages/precise/man3/set_clip_rect.3alleg.html
CircuitPython library for SD cards.

Project description

Introduction

CircuitPython driver for SD cards. This implements the basic reading and writing block functionality needed to mount an SD card using storage.VfsFat.

Dependencies

This driver depends on: Please ensure all dependencies are available on the CircuitPython filesystem. This is easily achieved by downloading the Adafruit library and driver bundle.

Usage Example

Mounting a filesystem on an SD card so that it's available through the normal Python ways is easy. Below is an example for the Feather M0 Adalogger. Most of this will stay the same across different boards with the exception of the pins for the SPI and chip select (cs) connections.

import adafruit_sdcard
import busio
import digitalio
import board
import storage

# Connect to the card and mount the filesystem.
spi = busio.SPI(board.SCK, board.MOSI, board.MISO)
cs = digitalio.DigitalInOut(board.SD_CS)
sdcard = adafruit_sdcard.SDCard(spi, cs)
vfs = storage.VfsFat(sdcard)
storage.mount(vfs, "/sd")

# Use the filesystem as normal.
with open("/sd/test.txt", "w") as f:
    f.write("Hello world\n")
https://pypi.org/project/adafruit-circuitpython-sd/
scratcher

Scratch card widget which temporarily hides content from user.

Features

- Android and iOS support
- Cover content with full color or custom image
- Track the scratch progress and threshold
- Fully configurable

Getting started

- First thing you need to do is add the scratcher as a project dependency in pubspec.yaml:

dependencies:
  scratcher: "^1.3.0"

- Now you can install it by running flutter pub get or through your code editor.

Setting up

- Import the library:

import 'package:scratcher/scratcher.dart';

- Cover the desired widget with the scratch card:

Scratcher(
  brushSize: 30,
  threshold: 50,
  color: Colors.red,
  onChange: (value) {
    print("Scratch progress: $value%");
  },
  onThreshold: () {
    print("Threshold reached, you won!");
  },
  child: Container(
    height: 300,
    width: 300,
    color: Colors.blue,
  ),
)

Properties

Programmatic access

You can control the Scratcher programmatically by assigning a GlobalKey to the widget.

final scratchKey = GlobalKey<ScratcherState>();

Scratcher(
  key: scratchKey,
  // remaining properties
)

After assigning the key, you can call any exposed methods, e.g.:

RaisedButton(
  child: const Text('Reset'),
  onPressed: () {
    scratchKey.currentState.reset(duration: Duration(milliseconds: 2000));
  },
);

Example Project

There is a crazy example project in the example folder. Check it out to see most of the available options.

Resources

- by The CS Guy
https://pub.dev/documentation/scratcher/latest/
I have a requirement to write derived data to specified worksheets within a .xlsx file. There is a DSS plugin that can output multiple datasets to a spreadsheet. It seems to create a new, single spreadsheet, where each dataset gets written to a worksheet within the .xlsx. That doesn't solve my requirement, where, for example, I have an existing spreadsheet containing worksheets named A, B, and C, and I need to write my dataset to B. I'm sure it would be possible to create a code recipe to do what I want, but I'd prefer a visual approach. By way of comparison, Alteryx's Output Data tool has the functionality I'm looking for. I'd welcome any suggestions on a DSS solution. Thanks in advance.

Hi @alundavid. I think you might have two options here: "Adapt" means that from the plugin section in DSS you can convert the installed plugin to a "dev plugin" that can be edited to match your needs. Cheers.

Welcome to the Dataiku Community. You might find this multi-sheet excel plugin to be of interest. Plugins are additional code that can be added to Dataiku DSS to provide a variety of nifty additional features.

Thanks Tom. That is the plugin I mentioned in my question, which doesn't do what I need (whereas the Alteryx tool does). It is interesting to look at the code of the plugin, and I can see where perhaps it might be enhanced to support my requirement, i.e.
by adapting this function:

def dataframes_to_xlsx(input_dataframes_names, xlsx_abs_path, dataframe_provider):
    """
    Write the input datasets into the same excel file in the folder
    :param input_datasets_names:
    :param writer:
    :return:
    """
    logger.info("Writing output xlsx file ...")
    writer = pd.ExcelWriter(xlsx_abs_path, engine='openpyxl')
    for name in input_dataframes_names:
        df = dataframe_provider(name)
        logger.info("Writing dataset into excel sheet...")
        df.to_excel(writer, sheet_name=name, index=False, encoding='utf-8')
        logger.info("Finished writing dataset {} into excel sheet.".format(name))
    writer.save()
    logger.info("Done writing output xlsx file")

It may be that I have to write a code recipe, but it seems a shame that I need to do that when Alteryx makes this process step effortless.
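For reference, targeting one sheet of an existing workbook doesn't strictly need a plugin change: recent pandas (1.3+) can do it directly through openpyxl via ExcelWriter's append mode. A sketch of such a code recipe (the function name here is mine, not part of the plugin):

```python
import pandas as pd


def write_df_to_sheet(df, xlsx_path, sheet_name):
    """Write df to one named sheet of an existing .xlsx file,
    replacing that sheet while leaving the other sheets untouched."""
    with pd.ExcelWriter(xlsx_path, engine="openpyxl", mode="a",
                        if_sheet_exists="replace") as writer:
        df.to_excel(writer, sheet_name=sheet_name, index=False)
```

Note that mode="a" requires the workbook to exist already, which matches the "existing spreadsheet with worksheets A, B, and C" scenario described above.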
https://community.dataiku.com/t5/Plugins-Extending-Dataiku-DSS/Write-to-a-worksheet-within-an-Excel-xslx-spreadsheet/m-p/14271/highlight/true
How about this?

private[packagename] trait InternalUtils extends Utils

class Foo extends InternalUtils

Please, let's not discuss syntax until we're done with the semantics. I'd like to keep this discussion focused and technical, for now.

One of the benefits (from a programming perspective) of using AnyVal for wrapper types is that the code is very compact. For example, suppose one is wrapping a user ID:

case class UID(value: String) extends AnyVal

That single line succinctly gives you an apply(String) method to wrap values, and a value method to unwrap them. The equivalent code for opaque types is substantially more verbose.

opaque type UID = String

object UID {
  def apply(s: String): UID = s

  implicit class Ops(val self: UID) extends AnyVal {
    def value: String = self
  }
}

If creating several wrapper types (e.g. for half a dozen types in a user record), opaque types add significant source code burden. Is this use case sufficiently important to warrant its own syntax/syntax extension (for example, opaque case type Foo = Bar)? (not trying to start a discussion about possible syntax; just want to discuss the possibility of having some syntax)

(EDIT: I had misunderstood the visibility rules that value classes are allowed to use, so these examples will work with value classes, modulo some potential boxing in some situations. See ghik's reply.)

One interesting point here is that the design of value classes is such that you are required to have unconditional public wrappers and unwrappers. (This is because the internal implementation's extension$ methods require third parties to be able to wrap/unwrap the types.)

By contrast, the proposal here gives the user control of when (or if) wrapping and unwrapping is possible.
Consider cases where we only want to allow certain values to be wrapped:

opaque type PositiveLong = Long

object PositiveLong {
  def apply(n: Long): Option[PositiveLong] =
    if (n > 0L) Some(n) else None

  implicit class Ops(val self: PositiveLong) extends AnyVal {
    def asLong: Long = self
  }
}

Relatedly, we might choose to use Int to encode an enumeration or flags, but want to ensure users can only use a small selection of actual values (to prevent users from wrapping arbitrary values):

opaque type Mode = Int

object Mode {
  val NoAccess: Mode = 0
  val Read: Mode = 1
  val Write: Mode = 2
  val ReadWrite: Mode = 3

  implicit class Ops(val self: Mode) extends AnyVal {
    def isReadable: Boolean = (self & 1) == 1
    def isWritable: Boolean = (self & 2) == 2
    def |(that: Mode): Mode = self | that
    def &(that: Mode): Mode = self & that
  }
}

Finally, we might not want users to be able to unconditionally extract back to the underlying values. In this case, we can restrict access to code in the db package:

package object db {
  opaque type UserId = Long

  object UserId {
    def apply(n: Long): UserId = n
    private[db] def unwrap(u: UserId): Long = u
  }

  def lookupUser(db: DB, u: UserId): Option[User] = ...
}

These are all interesting use cases that are not possible (except by convention) with value classes. I agree that the enrichment is a little bit cumbersome (which is why the first version of our proposal included it) but on balance I think the added flexibility and power of opaque types is worth a bit of verbosity for enrichment. In the future, if we improve the story with value classes and extension methods, opaque types will be able to reap the benefits.

What do you mean? I thought value classes can have a private constructor and wrapped member, e.g.
```scala
class Mode private (private val raw: Int) extends AnyVal
object Mode {
  val NoAccess = new Mode(0)
  val Read = new Mode(1)
  val Write = new Mode(2)
  val ReadWrite = new Mode(3)
  def apply(raw: Int): Option[Mode] =
    if (raw >= 0 && raw <= 3) Some(new Mode(raw)) else None
}
```

Note in your example, there is no way to make an unboxed `Mode`. To return `Option[Mode]` you must box, no?

I could have also done this:

```scala
def apply(raw: Int): Mode = {
  require(raw >= 0 && raw <= 3)
  new Mode(raw)
}
```

which incurs no boxing, but is less typesafe.

I am totally in favour of the opaque type proposal and I fully understand its superiority to value classes in terms of performance. I simply didn’t understand @non’s argument about “unconditional public wrappers” in value classes.

I am very much behind opaque types, and I don’t actually think they add too much verbosity in most situations. However, for basic wrappers, they do. Let me motivate my concern with the following example:

```scala
case class BrittleUser(id: Long, firstName: String, lastName: String, email: String)

case class User(id: User.Id, firstName: User.FirstName, lastName: User.LastName, email: User.Email)
object User {
  case class Id(value: Long) extends AnyVal
  case class FirstName(value: String) extends AnyVal
  case class LastName(value: String) extends AnyVal
  case class Email(value: String) extends AnyVal
}
```

By using a few short wrapper types, you get type safety, preventing you from getting the order of the fields wrong. However, to accomplish the same with opaque types is… a little bit ridiculous.

I like the idea of opaque types, and I think they add flexibility at extremely low cost for types which are more than a simple wrapper (such as your `Mode` type). However, in some situations, they lose to `AnyVal`s in source maintainability even though they win in performance.
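For comparison, here is a sketch of what just one of those four fields would look like under the proposal, following the SIP's companion-object pattern (the names are taken from the `User` example above; this is an illustration, not code from the SIP):

```scala
object User {
  opaque type Id = Long
  object Id {
    // Inside the defining scope, Id and Long are interchangeable, so these compile.
    def apply(value: Long): Id = value
    implicit class Ops(val self: Id) extends AnyVal {
      def value: Long = self
    }
  }
  // ...and the same boilerplate again for FirstName, LastName and Email.
}
```

Multiplied across four fields, this is the "a little bit ridiculous" cost being described.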
My thoughts on that SIP:

I really like the parallel of "`class` => actual reified/allocatable thing, `type` => compile-time thing with no runtime representation".

`AnyVal` boxing unpredictably is a huge downside. Good performance is good, bad performance is meh, but unpredictable performance is the absolute worst.

“Badly” behaved `isInstanceOf`/`asInstanceOf` is much less of a problem than people think:

- Those two methods are badly behaved in Scala.js, at least compared to Scala-JVM: `(1: Int).isInstanceOf[Double] == true`, `anyThingAtAll.asInstanceOf[SomeJsTrait]` never fails, etc. Of the things people get confused with about Scala.js, this doesn’t turn up that often.
- Scala-JVM `isInstanceOf`/`asInstanceOf` are already a bit sloppy due to generic type-erasure, e.g. `List[Int]().isInstanceOf[List[String]] == true`. Opaque wrapper types would simply be extending the type-erasure to non-generic contexts.
- Being able to say `"myStringConstant".asInstanceOf[OpaqueStringConstantType]` is widely used in Scala.js by “normal” users, e.g. for wrapping third-party library constant/enum/js-dictionary-like things in something more type-safe. It’s actually very, very convenient, and empirically the odd behavior of `isInstanceOf`/`asInstanceOf` when you make mistakes in such cases just doesn’t seem to cause much confusion for people.

In a similar vein, I would be happy for `Array[MyOpaqueTypeWrappingDouble]().isInstanceOf[Array[Double]] == true`. I want the predictable performance more than I want the predictable `isInstanceOf` behavior: if I wanted the other way round, I would use `AnyVal`s or just normal boxes instead!

I think @NthPortal’s concern about boilerplate is valid.
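The erasure point above is easy to check directly; the element type of a `List` simply does not exist at runtime, so `isInstanceOf` can only test the unparameterized class:

```scala
// Erasure in action: at runtime the check can only see "is this a List?",
// not "is this a List[String]?" (the compiler emits an "unchecked" warning here).
val xs: Any = List(1, 2, 3)
println(xs.isInstanceOf[List[String]]) // true
```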
While it’s “ok” to provide the low-level opaque-type and then tell people to build helpers on top using implicit extensions, it would be “nice” to have a concise syntax for common cases: one of which is opaque type + manual conversions to-and-from the underlying/wrapped type. For example, given

```scala
opaque type Id = Long
object Id {
  def apply(value: Long): Id = value
  implicit class Ops(val self: Id) extends AnyVal {
    def value: Long = self
  }
}
```

would it be possible to extract all the boilerplate `def apply`/`implicit class` into a helper trait?

```scala
opaque type Id = Long
object Id extends OpaqueWrapperTypeCompanion[Long, Id] // comes with `def apply` and `implicit class`
```

If we allowed `asInstanceOf` to be sloppy and let you cast things between the underlying and opaque types, we could dispense with the special “underlying type can be seen as opaque type in companion object, and vice versa” rule: you want to convert between them, use an `asInstanceOf`, and be extra careful just like when using `asInstanceOf` in any other place.

This `asInstanceOf` behavior may already be unavoidable anyway (from an implementation point of view) if you want to avoid boxing in all cases, e.g. when assigned to `Any`s, or when put in `Array[MyOpaqueType]`s. And `asInstanceOf` already has the correct connotation in users’ heads: “be extra careful here, we’re doing a conversion that’s not normally allowed”.

This also obviates defining a companion object for the common case of “plain” newtypes, without any computation/validation: just use `(x: Double).asInstanceOf[Wrapper]` and `(x: Wrapper).asInstanceOf[Double]` to convert between them. If someone wants to add custom computation/validation, they can still write a companion and do that. In particular, @NthPortal’s examples could then just use casting and not define a bunch of boilerplaty companion objects.
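If such casts were considered acceptable, the helper suggested above could be written once and reused. This is a hypothetical sketch (an `OpaqueWrapperTypeCompanion` like this is not part of the SIP); it relies on the fact that after erasure an opaque type and its representation are the same runtime value, so the casts compile to no-ops:

```scala
// Hypothetical reusable companion base for "plain" newtypes.
abstract class OpaqueWrapperTypeCompanion[Repr, T] {
  final def apply(r: Repr): T   = r.asInstanceOf[T]    // no-op at runtime
  final def value(t: T): Repr   = t.asInstanceOf[Repr] // no-op at runtime
}

opaque type Id = Long
object Id extends OpaqueWrapperTypeCompanion[Long, Id]
```

With this, `Id(42L)` wraps and `Id.value(i)` unwraps, with no per-type `apply`/`Ops` boilerplate.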
It might be worth exploring a custom syntax to smooth out the very common case of “opaque type with methods defined on it”, rather than relying on implicit classes in the companion to satisfy this need. People could continue to use implicit class extension methods when extending the types externally, but I think having some operations “built in” is common enough to warrant special syntax. I don’t have any idea what they might look like.

Yeah, this is my misunderstanding of the value classes spec. Value classes do have to expose a public wrapper and unwrapper to the JVM, but you’re correct that they aren’t exposed to Scala. I’ll edit my post to correct it.

For the curious, here’s some Java code that constructs an invalid Mode and is able to access its internals (but this wouldn’t work in Scala):

```java
package example;

import example.Mode;

class Test {
    public static void test() {
        Mode m = new Mode(999);
        System.out.println(m.foo$bar$Mode$$raw());
    }
}
```

I.
Frequent Print Error Messages

Technote (FAQ)

Question

Frequent Print Error Messages

Answer

This document describes some of the more common printing error messages, and outlines some solutions to resolve these problems. This document applies to AIX Version 4.3 and later.

Error messages

0781-003
- 0781-003 field value too long; limit is 20 chars.
- With: digest: 0781-017 error in config file /etc/qadm.config, line 169.
- mkque: (FATAL ERROR): 0781-277 Error from digester /usr/lib/lpd/digest, status = -1024, rv = 15636.
- In this case, the administrator had specified a remote queue name longer than 20 characters. You must limit queue names and remote queue names to 20 characters.

0781-004
- 0781-004 failed receiving acknowledgment
- Printing to a Tektronix printer using the LPD protocol. Fixed by adding -T 20 in /etc/qconfig to increase the backend timeout.

0781-005
- digest: (FATAL ERROR): 0781-005 illegal value `'.
- Other errors included with this are:
- digest: 0781-017 error in config file /etc/qadm.config,
- mkquedev: (FATAL ERROR): 0781-277 Error from digester
- First check for dummy entries in /etc/qconfig and fix them.
- Sometimes fixed by upgrading the printers.rte level.

0781-008
- 0781-008 cannot open /etc/qconfig.bin for writing.
- The system was busy and had a temporary lock on the file; rerunning the command to add or remove the queue worked. The other option is to manually remove the /etc/qconfig.bin file as root.

0781-009
- Remove /etc/security/failedlogin
- Increase the size of the / directory (90% full)

0781-010
- digest: (FATAL ERROR): 0781-010 cannot open /etc/qadm.config for reading.
- digest: errno = 13: The file access permissions do not allow the specified action.
- Error occurred when a member of the printq group was trying to manage queues.
- /usr/lib/lpd/digest does not have setuid set in AIX 5.1.
- Problem occurs when /etc/qadm.config gets created with 600 permissions, root:printq.
- Problem occurs when the printq user has umask=077.
- Change the umask to 022.
0781-012
- errno=11 means resource temporarily unavailable.
- This may occur after 0781-124.
- File may disappear from the queue, but nothing prints.
- This message comes from the digest command that is used to convert the /etc/qconfig file to binary form.
- The AIX 3.2.5 message associated with 0781-012 is normally "No device line in queue stanza". This normally happens when there is a syntax error in the /etc/qconfig file.

0781-013
- digest: (Fatal error): 0781-013 can not have rq line without hostid line.
- Other errors seen together:
- enq: (Fatal error): 0781-119 error from digester /usr/lib/lpd/digest, status=-1024, rv=14882
- digest: 0781-017 error in config file /etc/qconfig
- Indicates corruption of the /etc/qconfig file.
- In this case, the administrator had specified a remote queue name longer than 20 characters. You must limit queue names and remote queue names to 20 characters.

0781-014
- Error in /etc/qconfig file.
- Cannot have hostid line without rq line for a remote queue.
- Add an rq line to point to the remote host.

More: 0781-014
- "enq -d" error from digester
- Along with 0781-017 and 0781-119
- /etc/qconfig had been hand edited.
- Added a colon in /etc/qconfig and fixed the problem.

0781-015
- 0781-015, "line 125 in /etc/qconfig file"
- With 0781-017 error from digester "0781-119"
- Was missing a colon in /etc/qconfig file (manual edit corruption).

0781-016
- Error with a missing blank in /etc/qconfig when adding an HP4000 queue.
- Only happened with a failed install of printers.hplj-4000.rte 4.3.1.0.
- Removed the fileset and readded the queue.

0781-017
- Error in config file /etc/qadm.config
- Occurs when trying to add queues when /etc/qconfig is corrupted.
- Often caused when queues are removed while jobs are still queued.
- Look for dummy queues in /etc/qconfig and remove them if they are present.
- If lpstat doesn't show any dummies or errors, you may have to rebuild /etc/qconfig.
- This can cause qdaemon to go down.
0781-018
- fatal error 0781-018 cannot get gid
- Found that someone had removed the user lpd from the system.
- bos.rte.printers was not installed.
- Installed bos.rte.printers, which added user lpd.
- Separate problem: missing group printq with gid of 9.

0781-019
- Digest fatal error 0781-019 cannot change ownership /etc/qconfig.bin
- Digest errno = 2: a file or directory in the pathname does not exist.
- With enq error 0781-119 error from digester /usr/lib/lpd/digest
- Only seen adding a queue with psf; the queue was created anyway.
- Also: 0423-314 Error was detected when reading file /afpfonts/readme
- Problem caused by a bad codepage in the pagedef.
- Also: digest: (FATAL ERROR): 0781-019 Cannot chown /etc/qconfig.bin.
- enq: (FATAL ERROR): 0781-157 Cannot open /etc/qconfig.bin for reading. status = -1024, rv = 40232
- digest: (FATAL ERROR): 0781-008 cannot open /etc/qconfig.bin for writing. digest: errno = 17: Do not specify an existing file.
- enq: (FATAL ERROR): 0781-119 Error from digester /usr/lib/lpd/digest, status = -1024, rv = 112958
- Unresolved, but not dealing with psf.

0781-021
- enq: (Warning): 0781-021 Cannot print directory: /FIX060/
- AIX Version 4.3 Base Documentation says errors 0781-021 through 0781-034 are all associated with enq.
- Unable to find problems with directory permissions.

0781-022
- Remote queue.
- With: 0781-228 message from queuing system, rembak error #67 - socket number is in use.
- 0781-228 could not send data file to /var/tmp
- Removing the queue and changing the queue name solved this one.
- Fatal error: 0781-022: cannot execute /usr/lib/lpd/qstatus
- Customer had a crash a while ago and the permissions on the files in /usr/bin did not have the execute bit set for everyone.

0781-023
- enq (fatal error) 0781-023
- 0781-023 unable to close file tH1vkMa; engine error number=28, there is not enough space in the file system.
- /var was full.
- Increased the size of /var; removed /var/adm/wtmp.
0781-024
- 'Cannot write anymore to /var/spool/qdaemon/spool_name, file has become too large'
- 'errno 28 not enough space in filesystem'
- An effort to print a report produced this error on the screen and locked up the terminal.
  df
  Filesystem    512-blocks   Free  %Used  Iused  %Iused  Mounted on
  /dev/hd9var         8190   1456    12%    148      8%  /var
- Increase /var to solve the problem.

0781-025
- enq: (FATAL ERROR): 0781-025 Unable to open d903rc.afp.
- enq: errno = 127: The user ID or group ID is too large to fit in the provided structure.
- Occurs when printing files larger than 2 GB.
- AIX backend commands are not able to handle files greater than 2 GB.
- The error is likely when using the lpr command because it creates a spool copy by default.
- In some cases you may be able to print with the qprt -J flag set.
- The best answer is to split the file before printing or use a custom backend.

0782-026
- A virtual printer has not been configured for the print queue.
- Someone created a queue, but not a virtual printer.

0781-028
- 0781-028 unknown printer port.
- Happened when adding a printer with a queue that didn't exist on the remote server.

0781-029
- 0781-029 errno=13 when printing from info explorer.
- Fixed by removing the -r option from the enq print command.

0781-032
- 0781-032 indicates permission problems of some type.
- Fixed permissions in directories and /etc/filesystems and the error stopped.

0781-033
- 0781-033 no virtual printer queue is created.
- Occurred when adding a JetDirect printer.
- This happened because /tmp was 100% full.
- Another occurrence when a user was starting non-IBM programs from /etc/rc.

0781-034
- 0781-034 qdaemon appears to be dead.
- A file or directory in the path name does not exist.
- Cleared out /var/spool/lpd/qdir and /var/spool/lpd/stat, restarted qdaemon, and printing worked.

0781-035
-.

0781-037
- 0781-037: fatal error when trying to remove a print queue.
- Caused by a dummy: entry in /etc/qconfig. Remove the dummy entry.
0781-038
- enq -D fails with 0781-038 No device specified.
- This normally takes the default queue down.
- User had multiple queue devices for the default queue.
- Removed one queue device and the command worked.
- The specified connection is not valid.
- User had entered an invalid port number on a 64-port adapter when adding the queue.
- This was also seen with a JetDirect card where the printer needed port 9101 but was set to 9000.
- Removed and readded queues with correct parameters.

0781-040
- errno 25
- Changed up = FALSE to up = TRUE in /etc/qconfig.
- Printed after this change.

0781-041
- 0781-041 unable to locate queue on device.
- 0781-041 cannot specify device with remote queue.
- User was using qprt -P frtoff1:frtoff1 $* for a remote formatted queue.
- "Cannot specify device for remote queues" tells you that you must have a single device for the queue.
- Made the user a member of the printq group and he could run the command.

0781-045
- 0781-045: stty is not a typewriter error code.
- Customer setup is aix-->ntprinter.
- Remote queue was going down.
- Increased the timeout.
- Ordered u468677 and u468680 and cleaned up the spooling system.

0781-046
- enq: [fatal error]: 0781-046 illegal burst page option.
- Occurred when using the qprt -Bn flag.
- The -B flag requires two characters, so use -Bnn.

0781-047
- 0781-047 bad job number: 1005
- 0781-310 no such request perhaps is done.
- Possible workaround: if you have job 5 and 5/1005 in the same queue, move 5 (when specifying 5 it will take 5 and not 5/1005) to another queue and then do what you want to do with 5/1005: cancel, change priority.
- Called customer: not possible to do what he wants in any of the current AIX releases. The workaround is okay when you have sent all the jobs 1, 2, 3 ... BUT in 'real life' the job IDs of terminated jobs are reused, so it is no longer possible to know that 5 and 5/1005 will be together.
- Okay, told customer that he may ask for a DCR but he will then have to pay (customer has no SL ..
but as FE promised a response, worked on it a little) and I cannot say if it will be accepted or when it would be ready.

0781-048
- Bad print queue or device name: test on the console.
- Could happen if a file named test is in /var/spool/qdir.
- If this comes when doing status, it indicates an erroneous file in the /var/spool/lpd/qdir directory.

0781-049
- enq: 0781-049 Not privileged to use -Z
- enq: (WARNING): 0781-032 You must have permission to execute
- enq: (WARNING): 0781-029 /var/spool/lpd/dfA436cib10 has no permission.
- 0781-142 LAB record must contain 2 elements in the jdf file
- daemon warning 0781-154 entry /var/spool/lpd/qdir/00:txtpt2: bad format name changed to to:txtpt2
- lpd had crashed due to excess load while moving jobs to a new server and corrupted the jdf file.

0781-050
- 0781-050 bad PRINTER or LPDEST
- What is the name of your default queue? ls3p5si_pcl
- echo $LPDEST -> nothing
- echo $PRINTER -> lspp50_pcl
- lspp50_pcl used to be the default queue.
- export PRINTER=ls35si-pcl
- Now you can print to the default queue.

0781-053
- enq: (FATAL ERROR): 0781-053 Cannot chown tGhkXya. Errno = 1.
- enq: errno = 1: Operation not permitted.
- Administrator had exported all of the /var/spool/lpd directory.
- Problems with NFS export and mount as root unless you allow root access from the other system or set permissions.

0781-054
- error 0781-054 printing large files
- Caused when /var was more than 90% full when spooling files.

0781-058
- Unable to close file.
- errno=28 not enough space in /var
- 0781-304 Unable to open /dev/lp8 as standard out for backend errno=6 request to device...not exist 0781-305 retrying errno=6
- Removed files from /var/adm/wtmp and /var/spool/mqueue.

0781-061
- 0781-061 argument -q(queuename) is not accessible.
- Customer was using: lp file -dqueuename
- Fixed command: lp -dqueuename file

0781-062
- qdaemon: (FATAL ERROR): 0781-062 Error opening ../stat/p.dae1.@daeps1
- qdaemon: errno = 2: A file or directory in the path name does not exist.
- qdaemon dies.
- Problem after restoring a defective mksysb tape.
- Checked qdir and there were approximately 38,000 files.
- Cleared the queueing system and it worked.
- Another customer loaded AIX 4 drivers on an AIX 3 system.

0781-063
- 0781-063 Error reading FileName. Errno = 2
- Extra files in /var/spool/qdir. Removed them and restarted qdaemon.
- stopsrc -cg spooler; startsrc -g spooler
- Could also get this with a full /var filesystem.

0781-066
- 0781-017: err in qconfig file line ....
- 0781-119: err from digester
- Cleaned up /etc/qconfig. (Could also remove all queues and devices.)

0781-067
- startsrc -g spooler => all started, and got error 0781-067 can't write to device lp0.
- Problem with /dev/lp0 permissions or wiring (CTS).

0781-072
- qdaemon: (FATAL ERROR): 0781-072 Unable to fork a new process.
- qdaemon: errno = 12: There is not enough memory available now.
- Could not print. The system was overloaded and running out of paging space.
- Buy more memory or reduce the load on the system.
- Customer upgraded to AIX 4.2.1 from 4.1.4 and fixed his problem.

0781-074
- qdaemon: warning 0781-074, backend returned warning status.
- Only root can print.
- Bad permissions on the piojetd and piohpnpf applications.
- Permission on /dev was d--------. Changed to 775 and printing worked.

0781-075
- qdaemon couldn't exec /tmp/plotmode.
- Does /tmp/plotmode exist?
- Does it have the right permissions to execute?
- files: find . -name "*" -print | xargs rm
- After reducing files in /var/spool/lpd/qdir, /var/spool/lpd/qdaemon, and /var/spool/lpd/stat and recycling qdaemon, printing resumed.
- Another case when the network went down while users were printing to a network printer and files built up.

0781-077
- No device line in queue stanza.
- digest: 0781-012 No device line in queue stanza
- digest: 0781-017 error in config file /etc/qadm.config line 169
- 0781-077 error from digester
- vi /etc/qconfig -- removed the stanza for the dummy queue.
- stopsrc -gc spooler; startsrc -g spooler
- Readded devices.

0781-081
- 0781-081 quecsq ---> queue down.
- Problems were with the permissions of a custom backend script.

0781-083
- 0781-083 Failure to read queue directory. I checked /var/spool/lpd and there seem to be no errors.
- With the df command I can see /dev/hd2 is full.
- errors: HDISK2 physical volume missing or removed.
- Hardware error on hdisk2.

0781-084
- Remote printing AIX to AIX.
- Started lpd on the server.
- Using an alias for the client.
- Fixed by adding the client IP address in the server's /etc/hosts.lpd.
- Another customer added -T 60 to rembak in /etc/qconfig.
- Another: qdaemon: (WARNING): 0781-084 Core image in qdir directory. Remove this core file or move it first.
- qdaemon: errno = 2: A file or directory in the path name does not exist.
- 0781-304 unable to open /dev/lp0 as standard out for backend, errno = 16: the requested resource is busy.
- Problem was with the physical printer.

0781-085
- qdaemon: (WARNING): 0781-085 No queue dae9 in /etc/qconfig (name = /var/spool/lpd/qdir/n0root:dae9$#@!qeac).
- qdaemon: (FATAL ERROR): 0781-062 Error opening ../stat/p.dae1.dummy.
- Fixed /etc/qconfig and removed the jdf from /var/spool/lpd/qdir.
- Another got corruption of /etc/qconfig when adding a remote queue.

0781-086
- Backend /usr/lib/lpd/rembak failed with termination status 015.
- Problem occurred when printing to a remote system.
- The remote system's /var filled up to 100%.

0781-087
- Error msg from qdaemon: 0781-087 could not run your request.
- Permission problem on a custom backend.

0781-088
- Died running request on JetDirect or local.
- Usually this means the printer needs attention.
- Usually the queue goes to DEV_WAIT, or DOWN.
- Missing colon files after an upgrade could be the problem's cause.
- Also seen with 0782-054 when the file system was full.

0781-090
- Indicates an error with printer smit files.
- Ran the printer smit rebuild script and this fixed it for some.
- Another: Found an error in the link to /var/spool/lpd/pio/@local; fixed it to show the fully qualified name of the host.
- # ln -s /var/spool/lpd/pio/@local "suw_test.mydom.myguys.com"
- Another: 0781-090 Request /u01/app/delfour/del4/work/faxlp/07211739m36aee has finished on device lzr55:hp@slzr55
- The alias database /etc/aliases.pag is out of date. Check permissions on /var/spool/lpd/qdir files.
- Problem with the username and the /etc/passwd file lpd entry; changed it to lpd:!:9:4294967294::/: and fixed the problem.
- Another: qdaemon: (WARNING): 0781-093 Bad getjdf on name n0rec:label1$#@!uU7b.
- qdaemon: (WARNING): 0781-304 Unable to open /dev/lp0 as standard out for backend.
- qdaemon: errno = 6: There is a request to a device or address that does not exist.
- This was a cabling problem on the lp0 printer.

0781-094
- qdaemon: (FATAL ERROR): 0781-094 Could not set process credential (pcred)
- errno = 14: A memory address is not in the address space for the process.
- The only problem is that the format of the jdf file is incorrect after a migration to AIX 4.3.3 with jobs still queued.

0781-097
- qdaemon: (WARNING): 0781-097 Cannot get real uid.
- qdaemon: errno = 2: A file or directory in the path name does not exist.
- qdaemon is in the hung state when running a user script to move files.
- APAR IY17646 was created for the AIX 4.3 problem.
- Another time fixed by checking the userid and groupid of lpd in /etc/passwd.

0781-101
- 0781-101 Job number must be numeric between ? and ?
- No hits in database.

0781-102
- 0781-102 Fatal error invalid printer name test.
- Removed extra files in /var/spool/lpd/qdir to solve the problem.

0781-103
- 0781-103 Invalid option:
- No hits in database.

0781-104
- 0781-104 No queue devices detected.
- No hits in database.

0781-105
- Queue status (fatal error: 0781-105 process failure) New
- Seen while doing lpstat on AIX 5 when extra files were in /var/spool/lpd/qdir.
- At AIX 3.2.5 with the bsd stat filter this error occurs for some print servers. Make this change:
  #cd /usr/lib/lpd
  #vi bsdshort
  #----Set the state machine to read in messages and header lines
  STATE = 0
  insert this -- RS = "[\0\n]"
  QNAME

0781-106
- 0781-106 RAW REMOTE STATUS (FILTER, /usr/lib/lpd/axishort NOT FOUND)
- Reinstall the fileset printers.rte.

0781-108
- 0781-106 RAW REMOTE STATUS (FILTER, /usr/lib/lpd/axishort NOT FOUND)
- 0781-108 END OF RAW REMOTE STATUS.
- Reinstall the fileset printers.rte.
- IY01608 is now fixed in U466280 4.3.3.0 FRONT END PRINTER SUPPORT.

0781-109
- Filename format is bad, errno 2.
- errno 2 indicates no file or directory.
- This problem occurs mostly with third-party applications that may delete a file before printing finishes. Example application: Progress.
- Using lpr so that the job gets spooled will solve this.

0781-111
- No login name found.
- This problem occurs most often while trying to cancel jobs on a remote system, especially with Windows/NT servers, when using aixshort as the stat filter. Be sure to use bsdshort and bsdlong.
- Clear unwanted files out of /var/spool/lpd/qdir. These frequently cause this problem.
- Remove /etc/qconfig.bin, and rerun lpstat.

0781-112
- Problem with changing virtual printer attribute l=0.
- Set l to a very large number to avoid sending as pages.
- May be related to limits fields.

0781-113
- 0781-113 /etc/qadm.conf error line 113.
- Remove the dummy queue from /etc/qconfig.

0781-114
- 0781-114 Cannot cd to /var/spool/lpd/qdir; the directory does not exist.

0781-117
-.

0781-118
- 0781-118 error from digester /usr/lib/lpd/digester, status 1024.
- 0781-008 cannot open /etc/qconfig.bin for writing.
- Caused by the third-party application Pdaemon.

0781-119
- enq: errno=2 a file or directory in the pathname does not exist.
- errno 28 writing dir /var/spool/lpd/qdir.
- Or a hardware problem.
- Bad permissions on the /var/spool/lpd/qdir directory or on /var/spool/qdaemon can be the cause.
- Can't create temporary file.
- enq: errno=2 a file or directory in the pathname does not exist.
- Permission problems.
- Directory was full.

0781-123
- 0781-123: printer odm corrupted.
- Ran the rebuild printer odm script.
- Another: full /var and a broken link from /bin to /usr/bin.

- Failure to fork process.
- Occurs when lp tries to fork a process to execute the enq command to print a job.
- The lp command (as well as qprt, lpr, and lpd) are frontend commands that just execute enq. There are two reasons that the fork would fail:
- The total number of processes has been exceeded on the system. Increase the number of processes from smit.
- There is not enough memory. Buy more memory, or reduce the load.

0781-126
- 0781-126 read failure in /var/spool/lpd/stat/numfile.
- 0781-469 when sending print jobs.
- Customer cleared the spooler (overzealous).
- Probably could have just removed the numfile, which sets job numbers.

0781-132
- 0781-132 when printing.
- 0781-312 Request hosts removed from queue hp5a:hp@a_hp5a. Could not open or stat file /etc
- One problem was a bad /dev/null:
  rm /dev/null
  mknod /dev/null c 2 2
  chmod 666 /dev/null

0782-133
- 0782-133 opening the odm class sm_cmd_hdr failed (odm errno: 5908) the print queue and test:@jx03
- Frequent problem after upgrades from 3.2.5 to early AIX 4:
  ls -l /var/spool/lpd/pio/@local/smit   (file lengths are 0)
  cp /usr/lib/objrepos/sm* /var/spool/lpd/pio/@local/smit
  ls -l /var/spool/lpd/pio/@local/smit   (file lengths are normal)

0781-137
- 0781-137 error in adding an object to the odm class "sm_cmd_opt".
- errno=0 the print queue name p13-anc: @192 use local problem procedures.
- The rebuild odm script failed.
- /dev/hd9 was full and /var/spool/lpd/qdir/@local was corrupted (/var).
- chfs -a size=+24576 /var
- Another problem was that an emulator was giving a problem with the @local symbol.
- Applied the latest 4.3.3.05 maintenance level.

0781-140
- qdaemon: (WARNING): 0781-140 Size record must contain 4 elements in Job Descr. File
- qdaemon: (WARNING): 0781-154 Entry /var/spool/lpd/qdir/n0rec:label1$#@!uU7b: bad format.
- qdaemon: (WARNING): 0781-093 Bad getjdf on name n0rec:label1$#@!uU7b.
- qdaemon: (WARNING): Queue lsrddoc:lp5 went down, job is still queued:
- Backend Exit Value: EXITFATAL (0100)
- qdaemon: (WARNING): 0781-304 Unable to open /dev/lp0 as standard out for backend.
- qdaemon: errno = 6: There is a request to a device or address that does not exist.
- qdaemon: 0781-305 Retrying...
- errpt pointed to cabling, but the corrupted jdf must have been created by a bad process.
- Cleared the jdf, fixed the cabling.

0781-142
- qdaemon: (warning): 0781-142 log record must contain 2 elements in job description file.
- qdaemon: (warning): 0781-154 entry /var/spool/lpd/qdir/n0:lp10$#@!qXjae: bad format, name changed to t0:lp10.
- Another: lpd crashed and corrupted the jdf file.

0781-152
- 0781-152, which corresponds to "job # record is missing from path".
- Found that there were 6000 jobs queued and performance was very slow.
- Ship IY00411 to fix PSF performance problems.
- See: 0781-140
- See: 0781-142
- Problem with the jdf file.
- One problem was with corruption from an ADSM backup of qconfig.

0781-156
- 0781-156 /etc/qconfig does not exist.
- Bad permissions on /etc/qconfig: 'chmod 644 /etc/qconfig'
- Replace /etc/qconfig from mksysb.
- Usually points to sloppy admin policies.

0781-157
- 0781-157 Cannot open /etc/qconfig.bin for reading.
- Customer: /usr/lib/lpd and all files in it had owner root and group sys; the group should be printq. Changed and fixed.
- Another: Permission problems with /etc/hosts caused JetDirect printing not to work.
- Another: Permission problems with /etc/qconfig.bin and /var/spool/lpd/stat files.
- Another: A bad NFS mount caused the problem.

0781-159
- 0781-159, read error 0 on /etc/qconfig.bin.
- Corruption in /etc/qconfig.
- Another: Out of space in the / file system.
- Another: Bad permissions on /etc/qconfig.bin caused by a bad enq module.
  -rw-rw-r--  1 root  printq   3816 Aug 05 14:57 /etc/qconfig
  -rw-rw----  1 root  printq  13027 Aug 06 16:00 /etc/qconfig.bin
- Another: Reinstalled (a little harsh).
- Another: Updated these filesets: printers.rte (better).
- Another: Both / and /usr were 100 percent full.

0781-160
- Upgraded from 4.2.0.
- Corrupted /etc/qconfig.bin, blength = 1694507008.
- Had not rebooted since the upgrade; rebooted and printing worked fine.
- accidentally been changed.

0781-162
- Cannot awaken qdaemon.
- One cause is a full /var file system.
- Another cause is a bad /etc/qconfig file.
- A bad file being printed was one scenario. Cancel it, then stop and restart the qdaemon.
- In other scenarios there were bad files in /var/spool/lpd/qdir.
- One fix: simply recycle qdaemon (stopsrc -s qdaemon, startsrc -s qdaemon).
- Another fix: clear the queues and recycle qdaemon.
- Another possible cause: permission problems on /var/spool/lpd/stat.
- In one case the writesrv was not active.

0781-163
- 0781-163: cannot awaken qdaemon, Errno 2.
- ls /var/spool/lpd/qdir/* showed over 13,500 files.
- Removed files and was able to start qdaemon:
  rm /var/spool/lpd/qdir/*   -> ksh errors out with "parameter too long"
  cd /var/spool/lpd/qdir
  find . -name "n0*" -exec rm {} \;
  (and other files later)

0781-166
- 0781-166 could not get hostname.
- Using DNS, but nslookup showed reverse name lookup not working right.
- Should have fixed DNS, but instead added /etc/netsvc.conf with the line hosts=local,bind.
- Updated bos.rte.printers and printers.rte and fixed the problem.
- Another just restarted the qdaemon.

0781-167
- lpstat: enq 0781-167: could not get process credentials.
- Permissions on enq were wrong: chmod 6555 /bin/enq
- Permissions had inadvertently been set to the wrong value for all files in /usr/bin.
- qdaemon can't signal, qdaemon can't awaken.
- Dummy stanza in /etc/qconfig: remove it to fix the problem.
- Error occurred with: 0781-162

0781-176
- 0781-176 Assignment statement found before queue name.
- 0781-182 Problem with line 69 in /etc/qconfig: -aup=TRUE
- Restored a backup of the old /etc/qconfig and it worked.
- Removed these four lines from the existing /etc/qconfig:
  -aup=TRUE:
  device=dummy^?
  dummy^?:
  backend=dummy^?
- Device stanza does not match.
- Remove dummy entries from /etc/qconfig.

0781-179
- 0781-179 failure writing to file.
- Not enough space in the filesystem.
- df -k showed root 100% full.
- chfs -a size=+8192 /
- Able to print with a larger / filesystem.

0781-180
- 0781-180 the queue, queue_name, already exists in qconfig file.
- Removed the dummy entry from /etc/qconfig.
- Another: vi /etc/group, line nobody: : guest,nobody,lpd; removed guest from this line and the problem was fixed.

0781-181
- mkquedev: (FATAL ERROR): 0781-181 The queue, lp$2, does not exist in qconfig.
- Problem was using the dollar symbol in the queue name.
- Also created a dummy entry in /etc/qconfig.
- Another had problems created by using a different editor on /etc/qconfig.
- See 0781-176.

0781-183
- rmque: (FATAL ERROR): 0781-183 Cannot delete script: Queue contains devices.
- Must remove the queue device first.
- Manually remove queue and dummy stanzas.
- Problem in the Lexmark script for creating queues if a device name already exists.
- Find the line in /usr/lpp/lap/_mklanprt that begins with QDEVICE_COUNT= and change the "-f2" parameter to "-f1":
  QDEVICE_COUNT=`lsallq -c | cut -d':' -f1 ........
- Device stanza does not match.
- Remove dummy entries from /etc/qconfig.

0781-189
- rmquedev: (FATAL ERROR): 0781-189 Queue:device,
- 0782-318 EP8000-CPRM-1 is not a valid queue name. EP8000-CPRM-1:/hp@P-EP8000-CPRM-1: not found in qconfig file. Not deleted.
- rmque: (FATAL ERROR): 0781-192 Queue EP8000-CPRM-1: not found in qconfig file.
- Queue was missing from /etc/qconfig, but the colon files still remained. Manually removed these.
- One time / filesystem was 100% and /etc/qconfig was 0 length
- 0781-190 (queue) not found in the /etc/qconfig
- 0782-597 Value sa .. not in ring list
- Tried to set Z=! with lsvirprt
- Queue was created wrong by Intel Netport script.
- Removed and added remote queue to fix
- chquedev: (FATAL ERROR): 0781-191 The queue:device, bn60:@HP4P_COMPTA, does not exist in qconfig file.
- Other errors
- enq: (WARNING): 0781-039 Qdaemon appears to be dead.
- enq: errno = 2: A file or directory in the path name does not exist.
- enq: (WARNING): 0781-162 Cannot awaken qdaemon (request accepted anyway)
- enq: errno = 2: A file or directory in the path name does not exist.
- enq: errno = 3: The process does not exist.
- enq: (WARNING): 0781-174 Cannot signal qdaemon. Errno = 3.
- enq: errno = 3: The process does not exist.
- enq: (WARNING): 0781-162 Cannot awaken qdaemon (request accepted anyway)
- Define printer : bn65 Server : bn65.noh.be.solvay.com Queue : PASSTH
- enq: (WARNING): 0781-174 Cannot signal qdaemon. Errno = 3.
- enq: errno = 3: The process does not exist
- qdaemon gets errors if too many files are sent with one print command.
- Could be corrupt /etc/qconfig
- rmque: (FATAL ERROR): 0781-192 Queue lp$2: not found in qconfig file
- See 0781-181 for similar problem (don't use $ in queue name)
- Also the Lexmark error listed in that item.
- lsque: (FATAL ERROR): 0781-193 Queue prt01: not found in qconfig file.
- Goes with 0781-192 except this is the list-queue error.
- lsallq: (FATAL ERROR): 0781-194 Syntax error in qconfig file.
- Indicates corruption in /etc/qconfig file
- lpd fatal error 0781-196 lock file or duplicate daemon
- User had manually started a second lpd; stopped and ran startsrc -s lpd.
- lpd fatal error 0781-198 unable to name socket
- lpd: errno=67 socket name is already in use
- Found line in rc files: /usr/spool/lps/adm/lps.rc start. This was added by the easy spooler. You can't run both at the same time. Start the lps.rc.start file later manually.
The order of start matters.
- 0781-199 unable to create socket
- tcp/ip daemons were not started in /etc/inittab
- Another: Cloned machine; TCP and kernel were mismatched.
- 0781-200 failure to accept a socket connection
- 0781-204 lost connection.
- 0781-198 unable to name socket. The socket name is already in use.
- Applied TCP/IP server code update.
- Another: /var/spool/qdaemon was filled with old files; removed them and printing worked again.
- 0781-201 ill formed from address
- Usually indicates source port numbers not in the range 721-1024. RFC design for LPR/LPD printing.
- Server was using port 7210. Fixed server code.
- Interim Fix is available; also AIX 5 has a flag for this.
- This usually means that some application other than rembak or another valid lpr program is trying to contact the lpd.
- Duplicate with tn prt_host 515
- Must follow RFC 1179
- Failure to accept a socket connection 0781-200
- Socket name already in use 0781-198
- Couldn't start lpd
- Clean up all spooler files to fix
- Your host does not have line printer access.
- This means that the client name is not in the server's /etc/hosts.lpd or /etc/hosts.equiv file.
- This problem also occurs with nameserver and routing problems.
- (FATAL ERROR) 0781-204 lost connection.
- lpd: errno = 73: A connection with a remote socket was reset by that socket.
- Problem with TCP/IP connection between client and server during printing.
- 0781-205 Request recvjob queuename from lpd
- Warning message in debug lpd log file. Not a problem; just remove the -l flag from rembak.
- 0781-208 Unknown printer on a remote host
- Add queue with same name on remote server, or change remote queue name on client to one that exists on the remote server.
- 0781-210 ill-formed message
- Indicates that the remote server is sending RST.
- Turn off logging by removing -l from /etc/qconfig
- lpd: (FATAL ERROR): 0781-211 could not exec /bin/enq.
- Customer had changed lpd UID from 9 to 104.
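The 0781-201 entry above comes down to one rule: RFC 1179 clients must use a source port in the 721-1024 range. A small illustrative check of that rule (the function name is ours, not an AIX tool; it only checks a number, it does not inspect live traffic):

```shell
# Hypothetical helper: is a client source port inside the privileged
# range (721-1024) that RFC 1179 and the AIX lpd expect?
is_valid_lpr_source_port() {
    [ "$1" -ge 721 ] && [ "$1" -le 1024 ]
}

if is_valid_lpr_source_port 731; then
    echo "731 accepted"
fi
if is_valid_lpr_source_port 7210; then
    echo "7210 accepted"
else
    echo "7210 would trigger 0781-201"
fi
```

On a live system the actual source ports can be seen with iptrace or `netstat -a` while a job is being sent.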
(Change back and fix)
- 0781-212 lpd protocol is received when the source host sends a FIN|ACK and no data is received.
- Problem printing from AS/400 when it sends the control file first
- Change AS/400 to not send CFF, or upgrade to AIX 4.3.1 or later.
- 0781-213 error writing to lpd: not enough space
- Usually indicates that /var is full. You might need to increase the /var filesystem. Check for /var/adm files associated with accounting (like wtmp) that get too big and clean them up, or add a separate /var/spool/lpd filesystem.
- lpd: 0781-214 terminating
- This errlog entry usually indicates that the system resource controller, srcmstr, detected that a subsystem under its control, in this case lpd, terminated.
- Generic error. Can be a TCP/IP, system, or printing problem
- Look for other errors in system resources. Could be paging or running out of memory or CPU.
- lpd (FATAL ERROR): 0781-219 No file name: request from not printed.
- lpd: errno = 112: Cannot find the requested security attribute
- Points to the client not sending in the name of the file that is to be printed.
- Use iptrace or lpd logging to find the problem.
- lpd: 0781-221 Could not set process credentials
- lpd: errno = 112: cannot find the requested security attribute
- lpd: {fatal error} 0781-221 could not set process credential
- After upgrade from 4.1.5 to 4.3.1 server
- 0781-222 getsockname (): src not found, continuing without src support. errno=57
- User had manually started a second lpd; stopped and ran startsrc -s lpd
- 0781-224 failed receiving acknowledgement.
- Usually fixed by increasing the timeout of rembak.
- (FATAL ERROR) 0781-225 Error: libqb's log_init returned -1.
- rembak: errno = 9: A file descriptor does not refer to an open file.
- Increased timeout, and fixed remote queue name.
- Seen on AXIS print server. Set remote queue name to LPT1.
- 0781-227 errors with /etc/qconfig
- 0781-012
- Deleted dummy entries in /etc/qconfig
- errno=8: A system call received an interrupt.
- Could not ping HP JetDirect card. Fixed network problem and printed.
- errno=25: Specified file does not support the ioctl system call
- Could not send data file /var/spool/qdaemon/
- Changed some tcp/ip software on the mainframe side. (interlink TCP/IP)
- rembak : errno = 25: Inappropriate I/O control operation.
- The job gave an error with lpr, but worked with qprt. This is probably a problem with the header flag sent to the MVS interlink LPD.
- Error #67 socket already in use
- Printing to HP JetDirect EX3 as remote printer
- Increasing the timeout on rembak with -T seemed to help
- 0781-230 Could not send control file.
- Occurred printing to Lexmark printer with rembak
- Increasing the timeout on rembak fixed the problem.
- 0781-232 unknown host pr1
- Hostname was wrong in /etc/qconfig queue definition.
- Unknown host & ipaddress
- Added address in /etc/hosts and printed fine
- rembak : fatal error: 0781-234 unknown service for printer/tcp
- rembak : errno=2 : a file or directory in the pathname does not exist
- /etc/services file was missing this line:
  printer 515/tcp spooler # line printer spooler
- rembak: Fatal error: 0781-236 lost connection reading socket.
- IX44693 at AIX 3.2 for remote printing sometimes fixes
- Also can be a problem with the remote lpd
- Status on .... failed errnum=2, rembak=2 a file or path does not exist
- This looks like a missing file, probably from the application. The application may be overwriting the file before it prints.
- Check file size for 0.
- Check permissions on /usr/lib/lpd/rembak, /dev/null
- Add -c flag to lp command so the file is not overwritten.
- 0781-241 - socket already in use.
- Customer rebooted to reset all sockets.
- Probably could have checked sockets with 'netstat -a' and killed the process.
- Fatal error 0781-244 file acknowledgement sent but not received; print spooler dies
- rembak: errno = 4: A system call received an interrupt.
- xyplex terminal server and others:
- Set backend = /usr/lib/lpd/rembak -T50, which increases the timeout on rembak by adding the -T50 flag in /etc/qconfig.
- For remote with local formatting, set the rembak flags in /usr/lib/lpd/pio/etc/piorlfb.
  typeset piorlfb_rbflags="" # rembak flags
  Change to:
  typeset piorlfb_rbflags="-T30" # rembak flags
  or
  typeset piorlfb_rbflags="-T60" # rembak flags
- You may need to adjust this number (the timeout) upwards.
- Use swcons to trap the error messages.
- 0781-249 filesystem /tmp full
- Checked with df and found /tmp full
- chfs -a size=+24576 /tmp
- rembak: 0781-253 : Bad job number: 0.
- Trying to remove a print job that was on the remote system
- backend error 0781-254.
- Customer had added -t=50 for piorlfb. Should be a capital T.
- rembak: (FATAL ERROR): 0781-254 No print server specified. Conflict with system hostname and DNS entry.
- Another customer had reset the print server in the middle of a job.
- With some printer servers you will get this error if the remote queue is specified wrong.
- rmque failed 0781-265 queue not empty.
- Removed jobs from queue and could then remove the printer
- Another: ls -al /var/spool/lpd/qdir showed one zero-length file. Removed the file, manually removed the queue from /etc/qconfig, removed associated stat files from /var/spool/lpd/stat, removed virtual printer files from /var/spool/lpd/pio/@local/custom and ../ddi.
- Others: Removed dummy stanza from /etc/qconfig as the queue name was already gone.
- remq: 0781-267 invalid parameter 'host= '
- Missing 'host' command on system.
- lsallq -> 0781-271 cannot lock /etc/qconfig system lock table is full (error 49)
- Related to mounting directory and nfs locks
- 0781-272 error from digester /usr/lib/lpd/digest
- 0781-017 error in /etc/qadm.config
- 0781-012 no device line in queue stanza
- Removed dummy stanza from /etc/qconfig
- Occurred when adding a queue with mkque
- /etc/qconfig file corrupted
- Fixed /etc/qconfig problems and could add a queue
- Look for dummy queue stanzas in /etc/qconfig
- 0781-279 cannot open /var/spool/lpd/stat/s.min4247.b.dummy for writing. A file or directory in the path name does not exist.
- Looks like queue min4247 had a dummy device stanza
- Device server name was not specified on the device line for the queue.
- Incorrect spelling was in the hostname.
- rembak fatal error 0781-288 could not send data file.
- lslpp -l bos.rte.printers => 4.3.2.0 needs 4.3.2.4 U464019
- lslpp -l printers.rte => 4.3.2.0 needs 4.3.2.2 U464190
- qdaemon dies with error 0781-289 (Cannot lock /etc/qconfig.bin for reading)
- Apply IX82789
- 0781-290 qdaemon terminating because of signal
- cat /etc/hosts > /dev/lp0 --> 0403-005 cannot create specified file
- rmdev -l lp0 --> defined; so ran mkdev -l lp0
- Removed and readded printer.
- qdaemon 0781-291 terminated normally.
- OSLEVEL:4.3.2; applied update
- fatal error 0781-296: invalid number of copies 0
- Error in script that uses lp -o option
- Another had an error in a script setting the lpr -#0 flag
- qdaemon: (FATAL ERROR): 0781-297 Cannot open /var/spool/lpd/stat/s.dae1.dummy for writing.
- Cleared the queue.
- Can't open or create file
- This is usually a CTS problem. There is no signal from the printer to AIX CTS. Probably there are cabling problems, or the printer doesn't support RTS.
- 0781-304 Unable to open /dev/lp8 as standard out for backend errno=6 request to device...not exist
- 0781-304 Unable to open /dev/lp0 as standard out for backend, occurred after migration upgrade when device lp0 didn't exist. Add device.
- Also occurred when a database printed directly to the file. fuser -k /dev/lp0 fixed this.
- Also seen when lock file was busy
- A workaround is to add the printer as a tty, but this can have flow control implications if the printer goes offline.
- Another: 128-port; cfgmgr says devices.mca.8ee4 needs to be installed: rebooted
- Retrying message, comes with 304.
- 0781-305 Unable to open /dev/lp0 as standard out for backend. Resource is busy
- Removed and readded device and it started working.
- qdaemon warning: 0781-304 Unable to open /dev/lp0 as standard out for backend
- errno=16 device busy
- qdaemon warning: 0781-305 retrying
- qdaemon: 0781-306 successfully opened /dev/lp0
- Customer was printing from UNIVERSE print spooler
- Printer cable exceeded RS/232 specifications.
- "Error writing to /var/spool/lpd/stat/pid" Error=28
- "There is not enough space in /var filesystem"
- enq: (WARNING): 0781-310 No such request in any local queue
- Trying to delete a file which is already on the remote server.
- Cannot open or start files.
- Often only root can print.
- Permissions problem, sometimes on /dev/null, or printer programs such as piobe, qdaemon, enq, lp, lpr, and so on
- Can be on /dev/null if File = /dev/null.
- Check permissions of any log files or temporary files as well as all backends and printing programs.
- Bad permissions on /var/spool
- enq: (WARNING): 0781-313 Cancel all not supported on remote queues.
- Customer trying to use qcan -X -Pxxx, which will not work with remote queues; those must be canceled by job number. qcan -x jobnumber works fine.
- enq -isA <== only shows the queue status on the AIX side.
- Error message from third party install queue script:
- Contact emulax for assistance with their script.
- 0781-317 queue not empty cannot rename when trying to remove.
- Removed dummy entry and queue stanza by hand with vi.
- Removed JDF file from /var/spool/lpd/qdir.
- 0781-318 Received a data file of unspecified or wrong size, or data file before control file. (From Windows NT). This situation is described by Microsoft in document Q150154. It is solved by the following APARs: IX77946 and IX78857.
- When trying to create a virtual printer on the command line, user gets error: 0781-318 not a valid queue name.
- Action taken: command used: mkvirprt -d lp0 -n /dev/lp0 -q asc -s asc -t 4019
- 0781-321 the mkqueue command has failed
- Seen when dummy stanza in /etc/qconfig in 1998
- 0781-330 enq warning, this command not supported for remote queues.
- Found when trying to move jobs from local queues to remote queues.
- /usr/bin/enq -Q '5si' -P '3130' from command line worked
- qmove -m -u received error 0781-330 enq warning: request not supported by remote side of queue. Must move them on the server.
- Found that the command to bring a job out of held status, enq -p -#jobnumber, didn't work on a remote queue
- 0781-334 remote host refused a connect operation
- LPD wasn't running on the server.
- qdaemon: (FATAL ERROR): 0781-341 Unable to create message queue. Errno = 28.
- qdaemon: errno = 28: There is not enough space in the file system.
- For the "Unable to create message queue" portion of the error qdaemon was reporting, ran the ipcs command and found >144,000 ipcs message queues allocated for a uid that had no processes running. Removed them with ipcrm, and the qdaemon started.
- qdaemon: (FATAL ERROR): 0781-342 Error in message receive. Errno = 36.
- qdaemon: errno = 36: An identifier does not exist. Use local problem reporting procedures, or press Enter to continue.
- Fixing name resolution fixed this problem with one customer.
- This was a 3.2.5 problem and an upgrade fixed the problem.
- 0781-345 could not find job#615
- 0781-162 Cannot awaken qdaemon request accepted anyway.
- Cleared queue
- error 0781-346 job is already running unable to move job.
- Got message when queue in DEV_WAIT state and wanted to move the job.
- Disabled queue, canceled job, queue went DOWN but job still queued, moved job.
- 0781-350 Job %d not found -- perhaps it's done?
- The warning message comes up with every print job. The system is a completely new install of 4.3.3.
- 0782-026 A virtual printer has not been configured for print queue and queue device imp1:lp3. Refer to the mkvirprt command, or use local problem reporting procedures.
- qdaemon: Job number 660 (84660) has been deleted from the queue.
- qdaemon: (WARNING): 0781-304 Unable to open /dev/lp0 as standard out for backend
- qdaemon: errno = 6: There is a request to a device or address that does not exist.
- Removed jobs from /var/spool/lpd/qdir, and then removed and readded the queue properly.
- Another: Caused by using a numeric queue name.
- Unable to list another spooler's queues
- 5010-452 -> Cannot communicate with the communication daemon on port.
- Seen only with infoprint manager
- The namespace contained a printer, "printer2", that was corrupt.
- Shut down both servers and removed the printer2 entry from /var/pddir/default_cell/printers and restarted the servers; after 25 minutes both servers were working correctly.
- The "most up to date backup" that was installed yesterday was of IPM v3.1.0.10. The other server was running IPM v3.1.0.42. Had George install this ptf and verify that the servers were running at the same code level; after doing this the GUIs reported all printers correctly.
- 0781-357 connection to server failed
- rembak errno=4 a system call received an interrupt
- Problem seen once with lpr - possible permission problem.
- enq: (FATAL ERROR): 0781-360 Cannot alter, hold, release, or move another spooler's job.
- Seen with PSF and aix mixture. Required the infoprint manager queue
- 0781-362 cancel all not supported on another spooler's queues.
- qcan -x job# -P qname works
- enq -x job# -P queue also works
- enq -x job# didn't work for job.
- No solution was found - only seen once
- enq: (fatal error): 0781-364 Job Submission Failed
  Using infoprint manager, no local queues
  0781-208 Unknown printer on a remote host
  pdmsg 5010-527 UWASP1
  5010-527 Cannot find printer.
- PSM printer name was wrong.
- Problem also seen trying to use the cancel command on an infoprint queue.
- All problems logged appear to be with infoprint manager.
- enq: (FATAL ERROR): 0781-372 There are no queues defined in the /etc/qconfig file.
- Empty or corrupted /etc/qconfig
- "Timed out" rembak: 0781-374 Remote Host Refused Connection
- Check to see if the print server lpd is running.
- Does AIX have permission to print to the host?
- Is there space on the remote server?
- If the remote server is AIX, run "lssrc -s lpd" to see if lpd is running.
- 0781-375 Connection to server failed
- This occurs with Intel NetPort when at AIX 4.3.1. Upgrade AIX bos.printers and bos.rte.printers.
- 0781-376 cannot specify more than 72 jobs.
- Looked over this error and found that on 4.3.3 a queue set up as remote can only queue 72 jobs per print request, because of the lpd daemon: that is the number of permutations available for data file names. This is set by RFC 1179. If the queue is set up as local, up to 100 jobs per print request can be queued. So essentially Level 3 (Development) is saying that at 4.2.x and below we were wrong and sent 100 jobs to a remote queue, and that they had to fix it to comply with the RFC for remote printing. So now it is correct and before it was wrong. These limits are hardcoded.
- enq: (FATAL ERROR): 0781-377 Cannot specify more than 200 command line arguments
- 0781-376 Cannot specify more than 100 files per print request.
- Workaround is to write a script to print files in batches.
- 0781-379: Could not send control file first
- Reset rembak to not send the control file first.
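The workaround above for the 72-job limit is a wrapper that submits files in batches. A sketch of the batching logic (the function name and batch size are ours, and `echo` stands in for the real submit command - swap in e.g. `enq -P queue` or `lpr -P queue` on a live system):

```shell
# Sketch: submit an arbitrary file list in batches small enough for a
# remote queue (limit is 72 per request at 4.3.3; pick a lower batch
# size for headroom).  echo is a stand-in for the real print command.
print_in_batches() {
    limit=$1
    shift
    while [ $# -gt 0 ]; do
        batch=""
        n=0
        while [ $# -gt 0 ] && [ "$n" -lt "$limit" ]; do
            batch="$batch $1"
            shift
            n=$((n + 1))
        done
        echo "submit:$batch"        # e.g. lpr -Pqueue $batch
    done
}

print_in_batches 3 a b c d e f g
```

With a limit of 3 and seven files, the sketch issues three submissions: two full batches and one remainder.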
- 0781-380 Failed to retrieve a socket with a privileged address;
- rembak: errno = 13, the file access permissions do not allow the specified action
- Make the change to permissions: chmod 4550 /usr/lib/lpd/rembak.
- sendjob: (FATAL ERROR) 0781-390 Job 533 (tmp-print.prosuper00:00:10) failed.
- Problem was only occurring on one Microplex box. Compared all of the settings in the Microplex box with another Microplex (also model M202Plus) with another laser printer: found 6 options for logpathtype (job, user, pagecount, checksum, printer, I/O port) which are enabled on the failing box but disabled on the box without problems. After disabling those options, the problem disappeared. Enabling them again brought back the problem.
- Another: applied latest fixes to 4.3.3 and appear to no longer have printer problems.
- Another: 0781-390 (FATAL ERROR) 0781-390 Job /etc/hosts failed.
  0781-244 file acknowledgement sent but not received;
  0781-088 queue went down job is still queued.
  AXIS server had been added as JetDirect. Removed and added as remote printer.
  local filtering, rqname=pr1 RAW data handling
- Another: Increased -T nn value for piorlfb and rembak.
- See also 0781-391
- Another: LPD not running on SCO, and then /etc/hosts.lpd not set up right.
- (WARNING) 0781-391 Perhaps the file is too large for the server to handle.
- (WARNING) 0781-244 Failed receiving acknowledgement.
- (WARNING) 0781-228 Could not send datafile /etc/hosts to server support.
- (FATAL ERROR) 0781-390 Job /etc/hosts failed.
- SCO UNIX V3.2.4.2
- send PUSH|ACK with 02 instead of 00.
- Updated to bos.net.tcp.client 4.3.3.15 and bos.net.tcp.server 4.3.3.15.
- Received data of unspecified length or control file before data file
- This occurs at AIX release 3 after the addition of lpr streaming.
- The fix is to install PTFs U457367 and U455988.
- There is also a Microsoft fix.
- See 0781-126
- Cannot create temporary file
- Error no=28: not enough space in file system
- /var file system was full
- Permission denied for printing to lpd
- Caused by bad reverse name lookup
- Added client name to /etc/hosts.lpd
- Added client name to /etc/hosts or to name server
- Added line to /etc/netsvc.conf saying hosts=local,bind if not added to name server.
- After name resolution cleared up, printing was possible.
- 0782-002 There is not enough memory available now.
- Seen only with 7318 P10 printers with 4.1.5 code.
0781-120 0781-121 0781-123 0781-124 0781-126 0781-132 0781-133 0781-137 0781-140: 1 instance 0781-142 0781-148: 1 instance 0781-152: 1 instance 0781-154 0781-156 0781-157 0781-159 0781-160 0781-161 0781-162 0781-163 0781-166 0781-167 0781-174 0781-176 0781-177 0781-179 0781-180 0781-181 0781-182 0781-183 0781-188 0781-189 0781-190 0781-191 0781-192 0781-193 0781-194 0781-196 0781-198 0781-199 0781-200 0781-201 0781-202 0781-204 0781-205: Warning error only 0781-208 0781-210: Log warning 0781-211 0781-212 0781-213 0781-214 0781-219 0781-221 0781-222 0781-224 0781-225 0781-227 0781-228 0781-230 0781-232 0781-233 0781-234 0781-236 0781-239 0781-241 0781-244 0781-249 0781-253 0781-254 0781-265 0781-267 0781-271 0781-272 0781-277 0781-279 0781-284 0781-288 0781-289 0781-290 0781-291 0781-296 0781-297 0781-300: 1 instance 0781-304: frequent 0781-305 0781-306 0781-307 0781-310 0781-312 0781-313 0781-316 0781-317 0781-318 0781-321 0781-330 0781-334 0781-341 0781-342 0781-345 0781-346 0781-350 0781-352 0781-357 0781-360 0781-362 0781-364 0781-372 0781-374 0781-375 0781-376 0781-377 0781-379 0781-380 0781-390 0781-391 0781-398 0781-469 0781-721 0781-2020
Messages in piobe.cat
0782-002 0782-004 0782-006
- 0782-006 K not supported for the printer file type
- Answer here is to use -p17 or -p16.7 if it is a PCL printer to get condensed print. Usually used in conjunction with -v8.
- 0782-006 -T flag is not supported for the print file type
- Answer here was that the customer had added the -T flag to piobe in the /etc/qconfig file.
0782-018
- 0782-018: Cannot open digested database file for printer
- 0403-005: There is a request to a device that does not exist.
  device lp2
  0403-005 cannot create the specified file
- Added the device and set baud and printing worked.
- Copied queue definitions from another system without properly creating devices in another case.
0782-024
- 0782-024: The piobe command must be invoked only by the spooler's qdaemon.
- Problem trying to cancel job.
- Found a dead.letter file that was created in the qdir directory when trying to mail to a mainframe that had no mail server.
- One solution is to replace /usr/bin/write with a shell script to avoid sending mail.
  mv /usr/bin/write /usr/bin/write.bin
  create shell script /usr/bin/write
  --- start script ---
  PID=`echo $$`
  PPID=`ps -fp $PID | grep -v PPID | /usr/bin/awk '{ print $3 }'`
  ps -fp $PPID | grep qdaemon
  if [ $? = 0 ]
  then
      exit 0
  else
      /usr/bin/write.bin $*
  fi
  --- end script ---
  chmod +x /usr/bin/write
  -- Save a copy of /usr/bin/write as it will be overwritten
  -- in future upgrades.
- You can also redirect users' mail by changing rewrite rules in /etc/sendmail.cf
0782-025
- 0782-025 Unable to open d903rc.afp. enq: errno = 127: The user ID or group ID is too large to fit in the provided structure.
- Problem occurred printing a larger than 2 Gig file and spooling.
- Customer was using lpr, which does spooling by default.
- Use lpr -s to suppress copying the file to /var/spool/qdaemon.
- Also saw an error like this after upgrade to 4.3.3 with a host named "dev".
0782-026
- 0782-026 virtual printer has not been configured for print queue.
- Someone created a queue, but not a virtual printer.
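The write wrapper above hinges on one step: pulling the parent PID out of `ps -f` output so the script can tell whether qdaemon invoked it. That extraction can be checked in isolation (the function name is ours; column positions assume the usual `ps -f` header of UID PID PPID C STIME TTY TIME CMD):

```shell
# Extract the parent PID of a process from ps -f output.
# NR == 2 skips the header row; $3 is the PPID column.
get_ppid() {
    ps -f -p "$1" | awk 'NR == 2 { print $3 }'
}

get_ppid $$
```

In the wrapper itself this value is then fed to `ps -fp $PPID | grep qdaemon` to decide whether to suppress the message.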
- Another: 0782-626 opening the odm class "sm_cmd_hdr"
- Ran the rebuild printer odm script
0782-029
- 0782-029 Cannot open a temporary file
- Permissions on /tmp were 755, bin:bin
- Another: permissions on /tmp were 700.
- chmod 1777 /tmp
0782-031
- 0782-031: Value for input datastream not recognized.
- Check the value of the -d flag for qprt or the setting in the virtual printer.
- Make sure the value set has an i attribute in the virtual printer. For -da there should be ia, for -dp a ip, etc.
0782-033
- 0782-033 trying to create JetDirect queue.
- The /tmp filesystem was 100% full
0782-035
- 0782-035 cannot divide by zero with the %/ operator in database attribute
- Attribute name is wH, attribute value string is 8
- Customer was using qprt -p instead of -P in one case.
- 0782-035 when printing after power outage.
- Cleaned out /var/spool/lpd/qstat and qdir and restarted qdaemon.
- Another had file system corruption after a crash.
0782-040 - frequent
- 0782-040 Cannot open print file.
- User had custom shell backend with bad permissions
- lp -dhpzone1 -o nobanner smit.log in AIX 5.2: not a proper command for the AIX print subsystem. Could use with System V printing, or change to use qprt -Bnn, or lpr -h.
- 0782-040 Cannot open print file. The file name is lan
  lp -o lan -c -dcrud -s /etc/motd
  Same problem: -o commands are not defined for the AIX virtual printer. Use SysV if you want to use them.
- Can change /usr/bin/lp to a shell script and have it call enq.
0782-041
- 0782-041 Cannot access print file
- Same as 0782-040. When printing from Unidata, can change the NHEAD setting from 'o nobanner' to 'U_IGNORE' in the $UDTHOME/sys/UDTSPOOL.CONFIG file.
0782-042
- 0782-042 DEAD.LETTER in qdir
- Can't cancel jobs.
- Other errors with this: 0782-024, 0781-111.
- Can use the 'custom' /usr/bin/write script to stop sending messages to users without a local ID.
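One fix above is to turn /usr/bin/lp into a shell script that calls enq. A sketch of just the option-translation step such a wrapper would need (the function name and the exact flag mapping are ours - the note only says `-o nobanner` maps onto banner suppression like `qprt -Bnn`, and `-d queue` onto `-P queue`; verify the flags against your release before relying on them):

```shell
# Hypothetical option translator for an lp-to-enq wrapper: map System V
# style "-dqueue" and "-o nobanner" onto AIX spooler style flags.
translate_lp_opts() {
    out=""
    while [ $# -gt 0 ]; do
        case "$1" in
            -d*)                         # -dhpzone1 -> -Phpzone1
                out="$out -P${1#-d}" ;;
            -o)                          # -o nobanner -> -Bnn
                shift
                if [ "$1" = "nobanner" ]; then
                    out="$out -Bnn"
                fi ;;
            *)                           # pass everything else through
                out="$out $1" ;;
        esac
        shift
    done
    echo "$out"
}

translate_lp_opts -dhpzone1 -o nobanner smit.log
```

A real wrapper would end with something like `exec /usr/bin/enq $translated_args` after saving the original lp as lp.bin, mirroring the write-wrapper pattern elsewhere in this note.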
mv /usr/bin/write /usr/bin/write.bin
save this script as /usr/bin/write:
PID=`echo $$`
PPID=`ps -fp $PID | grep -v PPID | /usr/bin/awk '{ print $3 }'`
ps -fp $PPID | grep qdaemon
if [ $? = 0 ]
then
    exit 0
else
    /usr/bin/write.bin $*
fi
0782-045
- 0782-045 DIAGNOSTIC INFORMATION:
- 0782-045: stty is not a typewriter error code
- Filesets were missing.
- ordering u468677 and u468680
- 0782-045 tcgetattr does not support ioctl.
- All fixed by upgrade, reboot, or by fixing stty commands in users' .kshrc or .profile.
- 0782-045 DIAGNOSTIC INFORMATION: /usr/local/bin/rtel: not found
- Missing program from custom backend.
- 0782-045 DIAGNOSTIC INFORMATION: /dev/null: 0403-005 Cannot create the specified file.
- Fixed permissions on /dev/null.
- Often problems in .profile or permissions
0782-051
- 0782-051 Cannot specify -<num>
- Customer tried to use -Z1 for landscape. Should be -z1 or z+.
0782-054
- 0782-054 Error detected during output to printer.
- The device name is lp1 (hplj-4+)
- The errno from the write system call is 6
- Customer had cable flow-control problems
- /tmp or /var full is a frequent cause
- With 0781-088 a full filesystem caused the problem
0782-055
- 0782-056 Printer lp0 (lexOptraT) @ xxx.xxx.xxx.com needs paper.
0782-057 - frequent
- 0782-057: qdaemon: printer lp0 needs attention
- Application problems
- Cable problems - usually serial
- One case of a bad or improper parallel cable
0782-059
- 0782-059 Attribute name passed to subroutine piogetstr by program piodigest is not valid. The attribute name is mD
- Caused when customer moved files from one system to another.
- Remake queues.
- Sometimes happens when /var is full when adding new queues.
0782-075
- 0782-075 Cannot open input colon file. The file name is ls
  The errno (error number) from the open system call is 2
  Check the file name specified with the piodigest command.
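Several of the 0782-045 "stty is not a typewriter" entries above trace back to an unconditional stty call in a user's .profile or .kshrc, which fails when the profile is executed without a controlling terminal (for example, by the print subsystem). The usual guard is a tty test around the terminal-only commands:

```shell
# Guard terminal-only commands in .profile/.kshrc so they are skipped
# when stdin is not a terminal (e.g. when the spooler runs the profile).
if [ -t 0 ]; then
    stty erase '^?' 2>/dev/null || true   # any stty settings go here
fi
echo "profile continues normally"
```

With the guard in place, the same profile runs cleanly both at login and in non-interactive contexts.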
- running piodigest from the command line in testing this problem
0782-082
- 0782-082 can't write file when trying to add a queue
- /var is full
- Freed up space and added new queue.
0782-087
- 0782-087 @local/ddi not created. Cannot open directory /var/spool/lpd/@local/ddi
- From customer: all directories under /var/spool/lpd were missing and he had only recreated some of them. After copying /var/spool/lpd from another server the customer could create queues without problem.
0782-089
- 0782-089 output to standard error detected
- Problems were caused by application setting stderr.
- Another: Wrong permissions on /, /var, and /etc.
- cp /usr/lib/objrepos/sm* /var/spool/lpd/pio/@local/smit
  ls -l file lengths are normal
  Next: you must run chvirprt for each queue to rebuild smit data.
0782-320
- 0782-320: queue already exists
- Removed queue, removed dummy entry, rebuilt queue
0782-321
- 0782-321 the mkqueue command has failed
- Seen when dummy stanza in /etc/qconfig in 1998
0781-330
- qmove -m -u received error 0781-330 enq warning: request not supported by remote side of queue. Must move them on the server.
- Found that the command to bring a job out of held status, enq -p -#jobnumber, didn't work on a remote queue
0782-332
- 0782-332 Invalid print queue name 'a3245412341234124312341234123412341'. Name must not exceed 20 characters and may only contain the characters [a-z], [A-Z], [0-9], @, -, or _.
- With: 0782-628 Unable to create print queue.
0782-333
- 0782-333 there are no virtual printers defined
- Often caused by /tmp being full
- In one instance, a security application was removing virtual printers
0782-529
- 0782-529 No font is available for gothic 15 pitch.
- May require customization of the virtual printer by changing the mU attribute.
- Check available fonts from smitty chpq.
0782-530
- 0782-530 cannot determine terminal type
- Terminal attached printer
- Happens from remote
- Make sure terminal has TERM type set to one in terminfo.
- Set PIOTERM=ibm3151 resolved problem
0782-531
- 0782-531 Error occurred reading terminfo
- on PCs doing passthrough printing.
- Likely termtype did not have mc4, mc5 attributes or was missing entirely from terminfo. Use vt220 TERM type.
0782-532
- 0782-532 Cannot find terminfo attribute mc5 for ansi.
- chdev -l ttyx -a terminfo=vt220
- Also make sure the terminal's terminfo has mc4 and mc5 attributes set.
- Only seen on terminal attached printing.
0782-597
- 0782-597 Value sa .. not in ring list
- This indicates that a value for an attribute is not in the list of allowed values for the virtual printer. This can be changed, but you need to be an expert in virtual printers or read the documentation.
0782-598
- 0782-598 The value 1 specified in the validate op V in the limits field for _f attribute indicates failure.
- Added f1 attribute to virtual printer.
0782-622
- 0782-622 - Open ODM class failed 5908.
- Run rebuild odm file script
0782-626
- 0782-626 at attempt to add queue.
- With 0781-017
- Had dummy entries in /etc/qconfig
0782-628
- 0782-628 Unable to create print queue
- Could be that the queue name is invalid; see 0782-332
0782-652
- 0782-652: virtual printer already exists with that name
- Remove old virtual printer file from /var/spool/lpd/pio/@local/custom
- Then readd the queue/virtual printer
Status error problems to remote lpd on Lexmark printer
- Best solution is to set up with MarkNet software from Lexmark
- Install IX52289 to fix the AWK problem in AIX 4.1.
Problems printing to parallel printer
- It loses or drops characters.
- Set Time to delay before checking BUSY line [200]
Clearing the queue. To clear the jobs:
stopsrc -s qdaemon
ps -e | fgrep qd
kill -9 PIDNumbers
ps -e | fgrep pio
kill -9 PIDNumbers
1) Look in the /var/spool/lpd/stat directory for files that begin with "p.". These indicate active jobs that need to finish or be cleared. The file contains the pid (process id) for the qd fork or other associated backend printer program.
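The p.* status files mentioned above each hold the PID of an active backend. A sketch of walking them and collecting those PIDs, done on a scratch directory (mktemp and the sample PIDs are illustrative only; on a real system the directory is /var/spool/lpd/stat, and each PID should be checked with ps before any kill):

```shell
# Walk fake p.* status files and collect the backend PIDs they contain.
statdir=$(mktemp -d)
echo 4711 > "$statdir/p.lp0"
echo 4712 > "$statdir/p.lp1"

pids=""
for f in "$statdir"/p.*; do
    pid=$(cat "$f")
    pids="$pids $pid"
    echo "status file ${f##*/} -> backend pid $pid"
    # on AIX: ps -p "$pid" to inspect; kill "$pid" only if it is hung
done
```

This is the same information the `ps -e | fgrep qd` / `kill -9` sequence targets, but taken from the spooler's own bookkeeping.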
Often these programs can be cleared out manually as shown below:
#stopsrc -cg qdaemon
#rm /var/spool/lpd/qdir/*
#rm /var/spool/lpd/stat/*
#rm /var/spool/qdaemon/*
#rm /etc/qconfig.bin
#startsrc -s qdaemon

Printer smit rebuild
echo "\n"
echo "Stopping qdaemon."
echo "\n"
stopsrc -cs qdaemon
sleep 10
echo "\n"
echo "with the refreshed smit screens.\n"
sleep 3
for file in `ls`
do
/usr/lib/lpd/pio/etc/piodigest $file
done
echo "\n"
echo "Starting qdaemon"
startsrc -s qdaemon
echo "\n"
echo "The print queue refresh/relink operation is complete."

Historical Number: isg1pTechnote0434

Document information
More support for: AIX family
Software version: 5.3, 6.1, 7.1
Operating system(s): AIX
Reference #: T1000284
Modified date: 20 November 2012
http://www-01.ibm.com/support/docview.wss?uid=isg3T1000284
CC-MAIN-2018-43
refinedweb
10,275
65.73
Probability binning: simple and fast

Over the years, I've done a few data science coding challenges for job interviews. My favorite ones included a data set and asked me to address both specific and open-ended questions about that data set. One of the first things I usually do is make a bunch of histograms. Histograms are great because they're an easy way to look at the distribution of data without having to plot every single point, or get distracted by a lot of noise.

How traditional histograms work:

A histogram is just a plot of the number of counts per value, where the values are divided into equally-sized bins. In a traditional histogram, the bins are always the same width along the x-axis (along the range of the values). More bins means better resolution. Fewer bins can simplify the representation of a data set, for example if you want to do clustering or classification into a few representative groups.

A histogram with ten bins:

The same data with 3 bins:

Original implementation:

First, I used matplotlib to get the bin ranges, because that was easy. Then I applied those as masks on my original dataframe, to convert the data into categories based on the bin ranges.

import matplotlib.pyplot as plt


def feature_splitter(df, column, bins=3):
    """
    Convert continuous variables into categorical for classification.

    :param df: pandas dataframe to use
    :param column: str
    :param bins: number of bins to use, or list of boundaries if bins should be different sizes
    :return: counts (np.array), bin_ranges (np.array), histogram chart (display)
    """
    counts, bin_ranges, histogram = plt.hist(df[column], bins=bins)
    return counts, bin_ranges, histogram


def apply_bins_as_masks(df, column, bin_ranges):
    """
    Use bin_ranges to create a categorical column. Assumes 3 bins.

    :param df: pandas dataframe as reference and target
    :param column: reference column (name will be used to create new one)
    :param bin_ranges: np.array with ranges, has 1 more number than bins
    :return: modified pandas dataframe with categorical column
    """
    low = (df[column] >= bin_ranges[0]) & (df[column] < bin_ranges[1])
    med = (df[column] >= bin_ranges[1]) & (df[column] < bin_ranges[2])
    high = (df[column] >= bin_ranges[2])

    masks = [low, med, high]

    for i, mask in enumerate(masks):
        df.loc[mask, (column + '_cat')] = i

    return df

This worked well enough for a first attempt, but the bins from a traditional histogram didn't always make sense for my purposes, and I was assuming that I'd always be masking with 3 bin ranges.

Then I remembered that there's a different way to do it: choose bin ranges by equalizing the number of events per bin. This means the bin widths might be different, but the height is approximately the same. This is great if you have otherwise really unbalanced classes, like in this extremely simplified example, where a traditional histogram really doesn't always do the best job of capturing the distribution:

When to use probability binning:

Use probability binning when you want a small number of approximately equal classes, defined in a way that makes sense, e.g. combining adjacent bins if they're similar. It's a way to convert a numeric, non-continuous variable into categories. For example, let's say you're looking at user data where every row is a separate user.
The values of a specific column, say "Total clicks", might be numeric, but the users are independent of each other. In this case, what you really want to do is identify categories of users based on their number of clicks. This isn't continuous in the same way as a column that consists of a time series of measurements from a single user.

I used to do this by hand/by eye, which is fine if you don't need to do it very often. But this is a tool that I've found extremely useful, so I wanted to turn it into a reusable module that I could easily import into any project and apply to any column.

The actual process of getting there looked like this:

Step 1: create an inverted index

Step 2: write tests and make sure that's working

Step 3: use plots to verify whether it was working as expected (and for comparison with the original implementation). For the simple case it was, but on further testing I realized I had to combine bins if there were too many or they were too close together.

Step 4: combine bins

Step 5: use the bin ranges to mask the original dataframe and assign category labels

def bin_masker(self):
    """
    Use bin_ranges from probability binning to create a categorical column.
    Should work for any number of bins > 0.

    :param self.df: pandas dataframe as reference and target
    :param self.feature: reference column name (str) - will be used to create new one
    :param self.bin_ranges: sorted list of new bins, as bin ranges [min, max]
    :return: modified pandas dataframe with categorical column
    """
    masks = []

    for item in self.bin_ranges:
        mask = (self.df[self.feature] >= item[0]) & (self.df[self.feature] < item[1])
        masks.append(mask)

    for i, mask in enumerate(masks):
        self.df.loc[mask, (self.feature + '_cat')] = i

    self.df[self.feature + '_cat'].fillna(0, inplace=True)  # get the bottom category

Step 6: try it in the machine learning application of my choice (a decision tree - this will go in a separate post). Check the accuracy score on the train-test-split (0.999, looks good enough to me).
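The masking in Step 5 can be sketched as a standalone function, without the class state, to show how rows get mapped to category labels. This is my own illustrative version (the function name, column name, and bin ranges here are mine, not from the module):

```python
import pandas as pd

def apply_bin_ranges(df, feature, bin_ranges):
    """Label each row with the index of the [min, max) bin its value falls in."""
    for i, (lo, hi) in enumerate(bin_ranges):
        mask = (df[feature] >= lo) & (df[feature] < hi)
        df.loc[mask, feature + '_cat'] = i
    # anything that matched no bin falls through to the bottom category
    df[feature + '_cat'] = df[feature + '_cat'].fillna(0)
    return df

df = pd.DataFrame({'clicks': [1, 5, 12, 40, 300]})
df = apply_bin_ranges(df, 'clicks', [[0, 10], [10, 100], [100, 1000]])
# df['clicks_cat'] is now [0.0, 0.0, 1.0, 1.0, 2.0]
```

Because the first `.loc` assignment creates the new column with NaN for unmatched rows, the category column comes out as floats; the `fillna(0)` mirrors the bottom-category fallback in `bin_masker`.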
Step 7: write more tests, refactor into OOP, write more tests. Step 8: Add type hints and switch to using a public data set and pytest. Fix some stupid bugs. Write this blog post. Start preparing a package to upload to pypi for easier portability.
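As a postscript, the core idea — choosing bin edges so that each bin holds roughly the same number of points — can be sketched directly with quantiles. This is a minimal illustration of the concept, not the module's implementation (function name and sample data are mine):

```python
import numpy as np

def equal_count_edges(values, bins=3):
    """Return bin edges so that each bin holds roughly the same number of points."""
    quantiles = np.linspace(0, 1, bins + 1)  # e.g. [0, 1/3, 2/3, 1] for 3 bins
    return np.quantile(values, quantiles)

# A skewed sample: a traditional equal-width histogram would dump almost
# everything into the first bin
data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100, 200, 300])
edges = equal_count_edges(data, bins=3)

# The widths differ, but each bin now holds a third of the points
counts, _ = np.histogram(data, bins=edges)  # counts == [4, 4, 4]
```

Note that with heavily tied data the counts can still come out uneven, which is exactly why the combining step for near-duplicate bins matters.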
https://szeitlin.github.io/posts/statistics/probability-binning-simple-and-fast/
Background

ApsaraDB for Redis provides complex data structure types such as lists, hash tables, and sorted sets. When you use the service, improper key design may result in large keys. Because Redis uses a single-threaded model, operations that read or delete large keys can stall the service. In a cluster, large keys can also exhaust the memory of a single node. Therefore, you can use a scan tool to find large keys.

To scan for large keys in ApsaraDB for Redis, run the SCAN command on the master-replica edition, or the ISCAN command on the cluster edition. To obtain the number of nodes, run the INFO command. The syntax of ISCAN is as follows:

ISCAN idx cursor [MATCH pattern] [COUNT count]

In this command, idx specifies the node ID, starting from 0. For an eight-node cluster instance of 16 GB to 64 GB, idx ranges from 0 to 7. A 128 GB or 256 GB cluster instance contains 16 nodes.

Procedure

- Run the following command to download the Python client package:
wget ""
- Decompress the package and install the Python client.
tar -xvf redis-2.10.5.tar.gz
cd redis-2.10.5
sudo python setup.py install

- Create the following scanning script:

import sys
import redis

def check_big_key(r, k):
    bigKey = False
    length = 0
    try:
        type = r.type(k)
        if type == "string":
            length = r.strlen(k)
        elif type == "hash":
            length = r.hlen(k)
        elif type == "list":
            length = r.llen(k)
        elif type == "set":
            length = r.scard(k)
        elif type == "zset":
            length = r.zcard(k)
    except:
        return
    if length > 10240:
        bigKey = True
    if bigKey:
        print db, k, type, length

def find_big_key_normal(db_host, db_port, db_password, db_num):
    r = redis.StrictRedis(host=db_host, port=db_port, password=db_password, db=db_num)
    for k in r.scan_iter(count=1000):
        check_big_key(r, k)

def find_big_key_sharding(db_host, db_port, db_password, db_num, nodecount):
    r = redis.StrictRedis(host=db_host, port=db_port, password=db_password, db=db_num)
    cursor = 0
    for node in range(0, nodecount):
        while True:
            iscan = r.execute_command("iscan", str(node), str(cursor), "count", "1000")
            for k in iscan[1]:
                check_big_key(r, k)
            cursor = iscan[0]
            print cursor, db, node, len(iscan[1])
            if cursor == "0":
                break

if __name__ == '__main__':
    if len(sys.argv) != 4:
        print 'Usage: python ', sys.argv[0], ' host port password '
        exit(1)
    db_host = sys.argv[1]
    db_port = sys.argv[2]
    db_password = sys.argv[3]
    r = redis.StrictRedis(host=db_host, port=int(db_port), password=db_password)
    nodecount = r.info()['nodecount']
    keyspace_info = r.info("keyspace")
    for db in keyspace_info:
        print 'check ', db, ' ', keyspace_info[db]
        if nodecount > 1:
            find_big_key_sharding(db_host, db_port, db_password, db.replace("db", ""), nodecount)
        else:
            find_big_key_normal(db_host, db_port, db_password, db.replace("db", ""))

- Run the command python find_bigkey <host> 6379 <password> to search for large keys.

Note: The command returns a list of large keys in both the master-replica edition and the cluster edition of ApsaraDB for Redis. The default threshold for large keys is 10,240.
Large keys include string-type keys with a value longer than 10,240, list-type keys with more than 10,240 elements, or hash-type keys with more than 10,240 fields. By default, the script scans 1,000 keys per iteration to minimize the impact on service performance. Even so, we recommend that you run the script during off-peak hours to avoid the load caused by the SCAN command.
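The size rule the script applies can be isolated as a pure helper, which makes the threshold explicit and is easy to unit-test without a Redis server. This helper and its names are my own illustration (written in Python 3), not part of the original script:

```python
# Commands the script uses to measure each key type (kept here for reference).
LENGTH_COMMANDS = {
    "string": "STRLEN",  # bytes in the value
    "hash": "HLEN",      # number of fields
    "list": "LLEN",      # number of elements
    "set": "SCARD",      # number of members
    "zset": "ZCARD",     # number of members
}

def is_big_key(key_type, length, threshold=10240):
    """Return True if a key of the given type and measured length is a large key."""
    if key_type not in LENGTH_COMMANDS:
        return False  # unknown types are skipped, as in the original script
    return length > threshold
```

For example, a hash with 20,000 fields counts as a large key, while a 100-element list does not.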
https://www.alibabacloud.com/help/faq-detail/56949.html
Join devRant Search - "cache" - - - - - Client: "This feature doesn't work! I thought you said it was done?!" Me: "Please press CTRL+F5 and try again..." Client: "Okay, great, works now." A conversation I seem to have on a very regular basis.9 - "Welcome to everybody's favorite show: Did It Break!?!" "Here's our first contestant, Alex Brooklyn!" * Audience claps * ''Tell us Alex, what command did you use?!" ls -la "And did it break production?!" Not yet..., the website is still up, even checked without cache and on a different network, I haven't had any calls in half an hour and Sentry reports nothing "Great to hear! On to round 2!"16 - - - - - That moment when you've been trying to fix a bug for hours then suddenly realize you've fixed it hours ago but just didn't clear the cache5 - - "There are only two hard things in Computer Science: cache invalidation, naming things and off-by-one errors." - Phil Karlton3 - - I befriended a much-older dev who's notoriously known for cursing in source code comments. His best comment was F.I.S.H., which is his cursing acronym for "fucking incredible shitty hack"6 - There's not much worse than trying to fix your CSS for half an hour, only to realise that it's a cache issue...10 - Boss: we have to cache everything. Me: but some parts of the page are dynamic, we can not cache them Boss: EVERYTHING!!!! Few days later... Boss: This part of the page displays the wring content! --------------------------- Well duh. If content changes with every request caching is probably not a good idea...9 - Today I asked my client to do "ctrl+f5" to empty browser cache he literally did "ctrl+f+5" and said "it did nothing"7 - - - I saw a commit with suspicious code days ago. After warning my immediate superior he ignored me and yesterday proceeded to deploy. Now we have items in cache for days instead of minutes. I guess next time he will listen to me.5 - - Holy donkey nuts, I get too scared to leaved unpushed code when I take a coffee break. 
- Sometimes we say the customers they have to clear their browser cache but actually we are fixing the bugs they just found while talking with them on the phone.5 - - - - My school stores everyone's username and passwords (including admins) in plain text on a Windows 2007 server that they remote desktop into.8 - - There are two hard problems in computer science: naming things, cache invalidation and off-by-1 errors - Fucking cache in browser made me think that code is still working untill I opened it in incognito.6 - Known IPs for github (add to /etc/hosts) 192.30.253.113 github.com 192.30.253.113 ssh.github.com more on - - My boss in our northern office literally told my colleague that he'd been refreshing the site several times every few minutes and could clearly see that we hadn't done shit. Keep in mind that we are heavily cached with Varnish and Drupal Cache on our server, and this guy is never at the office. He was seeing our website from 3 days ago because his browser was retrieving local cache from the last time he was actually there and it was during a time where we had some broken items on the site. The part that pisses me off most is that not only did he not know to purge his browser cache to see changes, but he thought my coworker was making up hocus-pocus technobabble to "cover for me" by telling him how to clear his cache. This guy installed AirMail, 8 times on his Mac because he was entering SMTP settings that were literally given to him in screenshots with every step illustrated and every field of configuration available for reference, incorrectly. So yeah I can see how he would be technically capable of micro managing me. Fuck.2 - Google chrome Plz come on Just take the whole c drive for ur cache and stuff... 
Dont take my whole ram12 - - - - My damn 50 GB mobile broadband got used up because I did not realise that instead of using a local video for testing my website, I had set the video src to a HD version of Big Buck Bunny and disabled the fucking browser cache!7 - Relying on Chrome to remember all my passwords. I have no idea any more what passwords I have chosen for several important sites. Don't even want to think about what happens the day I switch PC or reset that cache somehow.12 - The answer to all your computer issues (now including the caching issues on websites): Restart your computer5 - When you change something in the webdesign, the user doesn't see it after refresh and you have to explain 'cache' 😐7 - - - - - Me: *tries to debug JS code for 5434910946th time on colleagues computer* Me soon after realising: clears cache Both of us: look down in sorrow5 - Clearing the cache. Tried clearing cms cache Tried restarting iis Tried browsing incognito Tried deleting browser cache... WHY U STILL SHOW OLD CSS?!?!23 - - - Dear namecheap, I honestly love your service and prices but how in the hell can I see an ip address in the dig of a new domain (url shortener) which I never put or saw there and which doesn't even belong to any server I own/operate?! DNS cache after the last chance of three days ago, nah, don't think so. Fucking hell.7 - - - - There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors. -- Jeff Atwood5 - - - Just got chewed out because someone couldn't see the latest interface changes on the site... *Walked over to their desk* Me: "Did you clear your browser's cache?" Them: "Oh, what does my online banking have to do with the updates?" Me: *sigh* 😬😬 - "We need high performance so cache basically everything, but we also need to see the newest data at all times that's not cached" Wut? What the fuck are you smoking?!... 
Can I have some?3 - - - - - That moment when you clean the cache on android phone with 4GB every time you want to download a new app, even if you have 32GB sdcard 😱😱😱4 - Every time someone compares Golang to Rust an angel falls, a unicorn dies and a Java developer writes another class. Please stop doing that.11 - First time web developing for real. Didn't realise I needed to clear chrome cache. So much wasted time - Dev deploys new CSS. Client: I can't see the changes PANIC. Dev: clear your cache! Client: oh that's better. Now can we do that for all user? Dev:😱 - - Use Cache wisely, otherwise they gonna pile up and no one would like to clean that shitty mess Pic source: Instagram -. - - - - There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors. This is really the stuff I have to deal on daily basis. - - I don't know why, but each time I have the chance to create a caching system with redis to e.g. cache requests to APIs I get all excited about it.5 - - - - Spent longer than I'd care to admit trying to find the reason my new features weren't being displayed. All coded fine but hadn't cleared browser cache. It's been a long week!3 - When you keep getting errors even though you're sure that you fixed the problem. Then you realise Chrome still uses the cached JS, no matter how many times you press CTRL + F5. Conclusion: disable the cache in the Dev panel. - I work in a corporate, and we are required to complete 10 hours worth of training every quarter. Systems don't have admin rights and we can't install anything on our own. This is what I mailed to the coordinator after to and fro of a few mails. He initially suggested clearing browser cache, when it didn't work, I raised an IT ticket to get it updated. Didn't fuckin work. Damn you, you hippo fucking imbeciles. I mean who the fuck in their right state of mind would have the audacity to recommend using flash. 
Absolute cunts ☠ 👿1 - Today i found variable called "booleanValue". So it IS true, that there are two hard things in computer science: cache invalidation and naming things...3 - - Me: "You could try using Redis, cache that baby and try and squeeze some speed" Dev: "Hun?! Should I use it on the front end or the back end?" Well... Webdev is not his thing to be fair!4 - - - - Web dev prob: When you modify a code then refresh your browser, It doesn't change anything and you think your code has the problem, Modifies 100+ lines and refreshed the page, still nothing happens. Asked someone about it, Fix? Fucking cache! Fuck you google chrome!10 - - If only we could only download the entire internet and cache it in a disk at home THEN I WOULDN'T HAVE TO FUCKING RECONNECT TO READ SIMPLE DOCUMENTATION EVERY FUCKING TIME MY INTERNET DROPS I'M NOT DOWNLOADING A MILLION DEPENDENCIES I'M JUST READING STACKOVERFLOW, FIX THE INTERNET FUCK8 - - - - that moment when the client reports a bug on the website, and you ask if they cleared the cache, but you fix the bug in the meantime...6 - A cache - related bug that gets triggered only at high loads, 10k parallel sessions or so. Parsing 30GB of logs, trying to find something to work with.... yippee......5 - - "There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors."1 - - - logging in to SO, used wrong google account, spent 10 minutes trying to find logout button, googled the issue, found the answer on meta SE, login with google, stuck in the same account, googled how to switch account, nothing works, cleared browser cache, switched account successfully, forgot why I'm logging in,3 - swear to God that if I get one more question tomorrow whose answer is either "clear your cache" or "mvn clean, then rebuild" I am going to lose my shit.2 - - - Why is it still not working for me? FOR THE 10TH TIME TODAY CLEAR YOUR FUCKING CACHE! Sorry what was that? 
Clear your cache please.1 - Caching is a subject, misunderstood by a lot of developers. Optimize then cache, don't cache to optimize.2 - WHY DOES GOOGLE CHROME CACHE THIS SHIT AND WON'T LOAD IT AGAIN. I THOUGHT I DIDN'T FIX THE BUG BUT GOOGLE CHROME IS THE BUG. THIS FLYING FUCK9 - Hour wasted trying to figure out why in the fuck borders weren't showing up, even though some other things changed when I updated my site's files. Completely inconsistent. The fix? DISABLING CHROME'S CACHE. FUCK YOU CHROME!2 - - Project managers have been pissing me off immensely today with little anal pixel for pixel changes so I've simply been making their requested changes and then telling them it was already fixed and that they have failed to clear their cache before addressing. Transfer that egg to their face. - - - Why the fuck someone uses ‘2’ instead of “to” in the C code, for naming. What are you, a child?. I have even seen “cache12store” meaning cache 1 to store...5 - Posted a question on Stack Overflow today for the first time in as long time... Have lost faith, what shit some suggestions people have. - Clear the cache, check again...🤨 - Your code is wrong, I tested it my way, you need to change.😒 Read the fucking post properly and gauge some level of expertise... I clearly wrote that it WAS working, the bull shit your detailing is completely irrelevant. Fucking idiots...4 - Just found this awesome function in the old commits. def clean_cache(): ''' This function cleans the stored cache in every update. Make sure to call it before every feature addition. ''' print 'Cache is cleared. ' return2 - WordPress uses 25+ MySQL connections per person. MySQL limit is set to 100. 4 people can bring down a critical component of the company. Only fix is to write custom MySQL connector using PDO and persistence connections. 
Added a Resistor cache just for good measure.8 - - - - - - When you're not getting a 100/100 score on page speed insights, because google analytics script is not leveraging browser cache... (ironically)8 - 👦🏻 : I Enter office. 🕵🏻 : 8 emails from client with subject line "Urgent Fire! Fix ASAP". 👦🏻 : Opens Application and everything seems normal. -- Another email 5 mins later -- 🕵🏻 : Oops sorry! It was my browser cache. 👦🏻 : 🙄3 - Trying to reflect JavaScript changes for about an hour on chrome to eventually realize that cache wasn't cleared..6 - - I've implemented an in memory caching system for database queries with Redis in one of the blogs I manage. Will it work well? Or do you think it will produce issues? I have no experience with Redis yet.14 - - Randomly reviewing a coworker's c++ codebase revealed he was locking at the beginning of a critical section, but explicitly calling unlock for each and every error-handling branching within it. And yes, he forgot to unlock at several places. That's just not RAIIght. - - - - Build a docker image. Adds config file . Build cache ignores new contents. Hours of trying to figure the shit out. Bash into it. GOD DAMN DOCKER HAS CACHED THE VERY FIRST VERSION OF THE FILE. Hours lost with headcaches and thinking about existence. Fck my life.9 - Being in a university that has an eSports Academy is less exciting when you're part of the team maintaining it weekly... Well, at least the part where we had to set up a local cache server with docker & nginx was fun - God I hate Liferay.... now I had to make bkend to make ws calls to frontend just to keep LR cache happy. Is there at least one sane soul who likes that thing?1 - - I.2 - when you try to stop tracking changes to files you added on git but instead of running the remove from cache command you run the remove from server also command. 
and it deletes all the cpanel config files and ruins the hosting account.3 - - - We recently started using a Redis cache for our application, and our boss has now taken to blaming Redis for every single incident we have - all because he doesn't understand how it works, and doesn't want to >.> - - Can't the YouTube app implement self-cache-cleaning or sth. It's really annoying to have 750MB ofcache after like 4-5 hours of videos.1 - - When you cache index a faster query but your co-worker from other part of the world clears it.... It's been six times now dude2 - Our site ran into a cache stampede which took down the site for awhile. While people were helping out, I just stayed out of their way since I knew nothing about it -_- I realise I need to git gud but I can't help feeling inadequate and useless at times like this. Is perseverance, experience, and passion to keep learning and bettering oneself the only way to being a master?2 - - - - If literally anything in our system stops working the supervisor's immediate response, regardless of whether it makes sense or not is to tell us to clear the browser cache. There are circumstances where that drives me absolutely up a wall. -. - - Can someone explain me why the size of the Facebook app is more than 350 MB on iOS? And I'm not counting cache e local data (which are more than 50 MB), but only the app. Any technical explanations?9 - What's that? You committed the tmp/dist/cache field for something only YOU run locally and asked me to review it. 
Just GET - The day my co-worker wraps his mind around Varnish and/or other caching mechanisms will be the day I rejoice unto the lords of the interwebz...2 - - - - When you carefully compress and cache all your webpage resouces to boost pageloads, but 3rd party social plugins make your website slow as hell.1 - when you spend way too much time trying to figure out why something isn't working and its because you didn't clear the cache - My manager wants me to add a caching layer on top of an API in 30 mins. Even to think about which part of the fucked-up system to cache will take more than 2 hours. How is anyone supposed to do it in 30 mins? I just gave up after about 4 hours. Gonna sleep and start fresh tomorrow. Pretty sure, I'm not gonna finish it tomorrow either.4 - Just ran disk cleanup. Windows update cache = 3.2 GB. How the fuck can an OS updater have 3 fucking GB's of cache. God dammit do they put shit in just to mess with my already slow connection... - There are only 2 hard problems in software: 1. Naming things 2. Cache invalidation 3. Off by 1 errors3 - - React Native so far today: 1. need external dependency to load svg 2. react-native link doesn't work as it should 3. metro bundler not updating its cache I wonder if the list will grow or that's it for today... - - - About half the chats with my line manager is just me being a rubber duck equivalent to him. M: Can you implement the stuffity stuff like I asked? Me: *starts typing* M: Oh nevermind it was cached - I know I'm pretty late to the party, but I've been playing with Redis a lot lately and it's pretty awesome. Sorted sets and the various Z functions seem very powerful. 
I'm hoping to get to use it in a prod environment soon.2 - Any recommended free Mac App Uninstaller which uninstalls the app completly including any preferences and cache?7 - I mistakenly denied while installing virtualbox's driver install dialogue which was set to remember settings, now no option to clear fcking INF cache in windows 10. FCK windows - Always spend an hour debugging your container services before you think to clear the browser cache.4 - I am so much into technology and also play CSGO frequently, that sometimes I get a dual image when someone says CACHE.2 - When you spend all afternoon correcting a layout issue only to find the problem was Chromes Super Cache!!!2 - Noooooooooo 😢 What will I do without stack overflow? .... Oh yeah, *inserts* "cache:" Crisis averted 😎 - - - My old man used to say there are two hard problems in Computer Science: cache invalidation and naming things. :D5 - Our client got mad once because the website we were building didn’t function on the Instagram in-app browser. I guess it was kinda my fault because it was due to a cached JavaScript file which threw an error on a newer version of the page (that’s when I learned about cache busting) but still, I kinda felt that’s a little ridiculous to get mad about - Client: I've this issue....... Me: Clear browser cache. Client: I've that issue..... Me: Clear browser cache. ____________ Client: My site says "Your hosting account is suspended". I cleared browser cache. Nothing works. Me: clear YOUR brain cache - - My code doesn't work and I don't know why. I cleared cache, my code doesn't work, and I still don't know why. I cleared cache, reloaded vagrant, my code doesn't work, and I still don't know why. 
I left my desk, got some coffee, checked devRant, refreshed my browser, my code worked, and I still don't know why!8 - - - There is a special place in hell reserved for the microsoft guy, who decided it would be a good idea to cache REST calls by default -_- #why2 - !rant I fucking hate maven and its shitty principles and the pain in the ass it fucking is to fucking use a dependency from another fucking repo that isn't in your fucking artifactory yet and how it can't fucking resolve it even when you downloaded it manually to your fucking m2 cache2 -. - I once wrote an http interceptor for which was supposed to check the internal cache for user data and only do some work with it if they were (we manually controlled what and who was in cache). There were two methods on the service cGetUser and dGetUser I of course called d which it turned out loaded the user profile from the database which would be fine if it weren't done in an interceptor .. on a web service... With a little over 25000 requests per minute.. on each node.. Tldr. I accidentally wrote a database ddos tool into our app...2 -) - So across different apartments, different routers, different notebooks and operating systems, my mother always ran into the issue where she had no internet access until I flushed the DNS-Cache. Never figured out how she achieves this.3 - Me: I integrated translations UI into app, so now you can change texts yourself. But keep in mind that for changes to take effect, you need to press on big red button that says “Invalidate cache”. Client: Wow! That’s so cool, but I think it doesn’t work, because when I change something, nothing happens. Me: 😔2 - - Wanted to delete cache for a project. By mistake I deleted cache and vital settings. The good news is that I make weekly backups, the bad news is that my latest backup is 4500 miles away from here 😓 - Done this a few times.... Client emails, there's a problem with the website and basic details. 
- I check website and quickly fix said problem. Email back: it's fine for me, try refreshing the page or clearing the cache. We should have a code name for the old clear-cache routine. Any ideas?
- Okay, if I understand correctly, if you want your website to be GDPR compliant, you must wait for user opt-in before storing anything on their device. Maybe I'm asking too many questions, but how exactly does this work for a PWA? Should you ask the user for permission before starting a service worker and/or before caching any content? If so, what if the user refuses the authorization? Is the app broken? Or does it just fall back to good old HTTP browsing if it's server-rendered?
- When your colleague saves over your stylesheet wiping out hours of work... Browser cache came to the rescue, but still...
- Screw AIX! More importantly, screw the IBM designer that thought cache batteries were a good way to monetize their platform to help validate the service contracts. I guess it "works", but at what cost? Just lost the last 4 business days going down this rabbit hole with a customer's server. Edit: Quick note, yes, the customer is on track for a migration soon.
- Name two production services, metrics and logging included, after a famous woman and an armored vehicle. Dude, no. When those services go down in the middle of the night, some poor soul on call duty will have to handle it without the faintest idea wtf is going on.
- When you think you suck at something, it's NOT your fault: learn how it's done in a different language or framework, then come back to it. When you think you've mastered something, it IS your fault: learn how it's done in a different language or framework, then come back to it.
- Trying to clear the redis cache for like half an hour, wondering why the redis server isn't even being filled with keys... Then suddenly: FUCK, wrong port. 2:30 AM is too late for me apparently :(
- When you develop a standalone page using JS and the old JS/jQuery libraries interfere with your current libraries! Open that js file, rename. Open that min.js file, rename. Still not working! Cleared cache... works like a charm! Damn you, cache and min.js!!!
- You know what the most confusing shit is? You know the bug, you know how to solve it, you know how to repro it... and the bug doesn't repro. Sad life. After trying to repro the bug 50 times I'm going to sleep; I mean, it only needs a cache clear and it should work.
- TL;DR: when talking about caching, is it even worth trying to be as memory efficient as possible? Context: I recently chatted with a developer who wanted to improve a framework's memory usage. It's a framework for creating Discord bots, providing hooks into events such as message creation. He compared it to 2 other frameworks, where it ranked last with 240 MB memory usage for a bot with around 10.5k users, iirc. The best framework memory-wise used around 120 MB, all running on the same amount of users. So he set out to reduce the memory consumption of that framework, and alone reduced the memory usage by quite some bit. Then he wanted to try out TTL for the cache, or rather cache entries with expiration times, adding no overhead besides checking every interval whether there are records that should be deleted. (Somebody in the chat called that sort of cache a meme. Would be happy if you could also explain why that is so 😅) Afterwards the memory usage dropped down to 100 MB after around 3-5 minutes. The maintainer of the package won't merge his changes, because some of them introduce stuff that might be troublesome later on, such as modifying the default argument for processes, something along these lines. Haven't looked at these changes. So I'm asking myself whether it's worth saving that much memory, because at the end of the day, it's cache. Imo cache can be as big as it wants to be, but should stay within borders and of course return memory if needed. Otherwise there should be no problem. But maybe I just need other people's points of view to consider. The other dev's reasoning was simply "it shouldn't consume that much memory", which doesn't really help, so I'm seeking you guys out 😁
- And I spent hours trying to see why my code won't fucking work; turns out Chrome wasn't doing a very good job at loading my updated files until I cleared the cache.
- !Rant Today I figured out how to cache the 'node_modules' folder on all my CircleCI builds, which cut the build time by 4 minutes, about 60%! 👍🍰
- Loading preview images from a website's articles into the cache for later use. What could go wrong? *26 (80x60) images in my cache folder, most of them corrupted.* "Ok... let's look at the size of this folder." Size: 112 MB. WTF! How could this happen! I'm literally writing from a URLConnection to a file. *Checks data usage.* Yup, that amount has been downloaded. Why!? My dear monthly data ¿_¿
- Symfony totally misses the point that a cache is supposed to sit on top of your code and accelerate it, not be an integral part of the software, so you cannot turn it the fuck off!!!
- What ill-minded cocksucker decided it was a good idea to let the web application cache MySQL login credentials..
- Working with CSS on WordPress and using Chrome to see the changes is a real pain in the ass. Second time clearing the cache in 2 hours.
- When you develop a TYPO3 extension and you have to uninstall and then reinstall it because TYPO3 doesn't have any button for clearing the class cache...
- Can't set the cache headers in GitHub Pages. Now people are criticizing my old portfolio site. Great. Thanks, GitHub.
- Well, a combination of DXVK, wine and dxvk-cache-pool was used to try and play Path of Exile. The problem seems to be that I can't have any pre-built caches due to them not existing. Seems like a GTX 660 isn't really used anymore, and if I want to play a game I will have to have DXVK build its own cache. Until then, I'm stuck with a stuttery mess of a game due to Path of Exile having rather many levels. A full playthrough will be necessary until it starts working smoothly.
- Having a problem finding the location of a cache directory. So I turn to Google. Everyone says "look in the cache directory" and acts like it should be in some obvious location (which it isn't), but NOBODY, not even the software documentation writers, mentions exactly WHERE this directory is.
- Just went to update my nextcloud instance; is there an archive of packages for archlinux ARM? nextcloud stable isn't compatible with php 7.2. I regularly clear /var/cache/pacman/pkg, and yes, I already checked...
- Feeling bored about Android's cache issue. Since I first used ICS until Marshmallow, I still have to clear apps' caches due to lack of space in my phone. :(
- Still on this: so I understand that in the company's framework, to store data in your cache, you use a method called Load. So yeah, that seems kinda okay somehow. But this method is called by another one called GetOrLoad, that will get or load the data. Is my English bad, or is it really ambiguous given that context?
- Just tried Min. Awesome: fast, content blocker, easylist, clean as I like it, mounted config & cache to tmpfs. 🙂 Btw: why won't the guys at brave.com fix this annoying bug... No sandbox, no Brave. 😣
- Anyone have thoughts/opinions about Azure Service Bus? I've been using it for a few months now in combination with a redis cache, cloud storage and the service bus.. works pretty nice so far.. I'm pretty impressed with the upgrade mechanism..
- After a damn long time considering it (a lot of data in there and I'm lazy), I've finally wiped the android clean (dalvik + cache + the rest). Exactly what I was expecting happened: all in-app errors (even the devRant feed) magically disappeared. So, if you experience something like that, don't be lazy and wipe. There's also the speedup of the system and other pros of wiping, but those aren't as important as getting rid of errors ^^
- The lack of a meta-language in C# can be a pain in the ass; I have to jump through hoops to generate something like Python's decorators, not to mention having to generate IL to overcome some limitations of reflection when dealing with value types.
- Spent half a night figuring out why all the links on my drupal website pointed to a weird subdomain after migration. Angry; in the morning I realised that the cache system had gone completely weird and somehow pointed itself to a completely different domain. Thanks, drupal.
- $ sudo rm -Rf /var/cache/pacman/pkg/*
  sudo: unable to execute /usr/bin/rm: Argument list too long
  $ sudo bash -c "shred /usr/bin/rm && shred /sbin/sudo"
- Thank you, modpagespeed, for using shit methods to compress the source, and for your amazing work with the client-side cache. The whole site was fucked up for a day and I didn't notice. Note: press Ctrl+F5 20 times if you tweak anything in JS. Even if it's 100% working, pagespeed can fuck it up. Turn that shit off.
- I'm starting to believe that cache, or at least the browser cache, is the worst and at the same time most brilliant invention: it's a nightmare to serve the right content at times, but at other times it's the perfect escape hatch for any problem. ;)
- Why, SQL, why??? I have a proc I need to modify, so I add a select into it. Drop the proc and recreate it, run it: the new select gives no results. Modify the select to inverse the filter to see what I do have, recreate the proc, run it: still no results... Run four different cache-cleaning queries: still no results from the new select... Add a "select 1" before the new select, recreate and run the proc, and now I have the new 1 and the other select has results too... Change the filters back: still getting the same results... Remove the select 1: no results... What kind of devil cache is this?
- Setting Cache-Control headers that are aware of future changes on a given page is freakishly complex. Too bad the Expires header doesn't overrule Cache-Control.
- My favorite is to keep a cache of "gimmes": the idea is to keep a collection of tasks that need doing but are super easy and really low priority. The theory is the same as doing a mundane task: you simply mindlessly code through some tasks, allowing you to think through things in a new way and hopefully clearing up your block... plus you're still mildly productive.
- Caching is cool. But damn it! When it comes to testing it's just a hell of a pain to search out and disable every cache found to ensure the result is repeatable. npm cache, /tmp, ~/.cache, and more, and
https://devrant.com/search?term=cache
XmTabList

#include <Xm/Xm.h>

XmTabList is the data type for a tab list. A tab list consists of tab stop list entries (XmTabs). Whenever a tab component is encountered while an XmString is being rendered, the origin of the next X draw depends on the next XmTab. If a tab stop would cause text to overlap, the x position for the segment is reset to follow immediately after the end of the previous segment.

Tab lists are specified in resource files with the following syntax:

    resource_spec: tab WHITESPACE [, WHITESPACE tab ]*

The resource value string consists of one or more tabs separated by commas. Each tab identifies the value of the tab, the unit type, and whether the offset is relative or absolute. For example:

    tab   := float [ WHITESPACE units ]
    float := [ sign ] [ [DIGIT]* . ] DIGIT+
    sign  := +

where the presence or absence of sign indicates, respectively, a relative offset or an absolute offset. Note that negative tab values are not allowed. units indicates the unitType to use as described in the XmConvertUnits reference page. For example, the following specifies a tab list consisting of a one-inch absolute tab followed by a one-inch relative tab:

    *tabList: 1in, +1in

For resources of type dimension or position, you can specify units as described in the XmNunitType resource of the XmGadget, XmManager, or XmPrimitive reference page. Refer to the Motif Programmer's Guide for more information about tabs and tab lists.

Related: XmTabListCopy(3), XmTabListFree(3), XmTabListGetTab(3), XmTabListInsertTabs(3), XmTabListRemoveTabs(3), XmTabListReplacePositions(3), and XmTabListTabCount(3).
http://www.makelinux.net/man/3/X/XmTabList
Intro. In this post, I’ll give practical details and example code for basic NLP tasks; in the next post, I’ll delve deeper into the standard tokenization-tagging-chunking pipeline; and in subsequent posts, I’ll move on to more interesting NLP tasks, including keyterm/keyphrase extraction, topic modeling, document classification, sentiment analysis, and text generation.

The first thing we need to get started is, of course, some sample text. Let’s use this recent op-ed in the New York Times by Thomas Friedman, which is about as close to lorem ipsum as natural language gets. Although copy-pasting the text works fine for a single article, it quickly becomes a hassle for multiple articles; instead, let’s do this programmatically and put our web scraping skillz to good use. A bare-bones Python script gets the job done:

    import bs4
    import requests

    # GET html from NYT server, and parse it
    response = requests.get('')
    soup = bs4.BeautifulSoup(response.text)
    article = ''
    # select all tags containing article text, then extract the text from each
    paragraphs = soup.find_all('p', itemprop='articleBody')
    for paragraph in paragraphs:
        article += paragraph.get_text()

We have indeed retrieved the text of Friedman’s vapid commentary —

    …and get our act together as a country — and if the Chinese, Russians and Europeans don’t do the same — we’re all really going to regret it. Think about what a relative luxury we’ve enjoyed since the Great Recession hit in 2008…

— but it’s not yet fit for analysis. The first steps in any NLP analysis are text cleaning and normalization. Although the specific steps we should take to clean and normalize our text depend on the analysis we mean to apply to it, a decent, general-purpose cleaning procedure removes any digits, non-ASCII characters, URLs, and HTML markup; standardizes white space and line breaks; and converts all text to lowercase.
Like so:

    def clean_text(text):
        from nltk import clean_html
        import re
        # strip html markup with handy NLTK function
        text = clean_html(text)
        # remove digits with regular expression
        text = re.sub(r'\d', ' ', text)
        # remove any patterns matching standard url format
        url_pattern = r'((http|ftp|https):\/\/)?[\w\-_]+(\.[\w\-_]+)+([\w\-\.,@?^=%&:/~\+#]*[\w\-\@?^=%&/~\+#])?'
        text = re.sub(url_pattern, ' ', text)
        # remove all non-ascii characters
        text = ''.join(character for character in text if ord(character) < 128)
        # standardize white space
        text = re.sub(r'\s+', ' ', text)
        # drop capitalization
        text = text.lower()
        return text

After passing the article through clean_text, it comes out like this:

    yes, its true a crisis is a terrible thing to waste. but a timeout is also a terrible thing to waste, and as i look at the world today i wonder if thats exactly what weve just done. weve wasted a five-year timeout from geopolitics, and if we dont wake up and get our act together as a country and if the chinese, russians and europeans dont do the same were all really going to regret it. think about what a relative luxury weve enjoyed since the great recession hit in …

It may look worse to your eyes, but machines tend to perform better without the extraneous features. As an additional step on top of cleaning, normalization comes in two varieties: stemming and lemmatization. Stemming strips off word affixes, leaving just the root stem, while lemmatization replaces a word by its root word or lemma, as might be found in a dictionary. For example, the word “grieves” is stemmed into “grieve” but lemmatized into “grief.” The excellent NLTK Python library, with which I do much of my NLP work, provides an easy interface to multiple stemmers (Porter, Lancaster, Snowball) and a standard lemmatizer (WordNet, which is much more than just a lemmatizer). Since normalization is applied word-by-word, it is inextricably linked with tokenization, the process of splitting text into pieces, i.e.
sentences and words. For some analyses, tokenizing a document or a collection of documents (called a corpus) directly into words is fine; for others, it’s necessary to first tokenize a text into sentences, then tokenize each sentence into words, resulting in nested lists. Although this seems like a straightforward task — words are separated by spaces, duh! — one notable complication arises from punctuation. Should “don’t know” be tokenized as [“don’t”, “know”], [“don”, “‘t”, “know”], or [“don”, “’”, “t”, “know”]? I don’t know. ;) It’s common, but not always applicable, to filter out high-frequency words with little lexical content like “the,” “it,” and “so,” called stop words. Of course, there’s no universally-accepted list, so you have to use your own judgement! Lastly, it’s usually a good idea to put an upper bound on the length of words you’ll keep. In English, average word length is about five letters, and the longest word in Shakespeare’s works is 27 letters; errors in text sources or weird HTML cruft, however, can produce much longer chains of letters. It’s a pretty safe bet to filter out words longer than 25 letters.
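To see the contraction problem concretely, here is a tiny standard-library-only illustration (my addition, not from the original post) of how the choice of token pattern changes the result:

```python
import re

text = "I don't know."

# naive split on word characters: breaks the contraction apart
naive = re.findall(r"\w+", text)
# allow one internal apostrophe group: keeps "don't" together
contraction_aware = re.findall(r"\w+(?:'\w+)?", text)

print(naive)              # ['I', 'don', 't', 'know']
print(contraction_aware)  # ['I', "don't", 'know']
```

Neither choice is "correct" in the abstract; which one you want depends on what the downstream analysis does with contractions.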
As you can see below, NLTK and Python make all of this relatively easy:

    def tokenize_and_normalize_doc(doc, filter_stopwords=True, normalize='lemma'):
        import nltk.corpus
        from nltk.stem import PorterStemmer, WordNetLemmatizer
        from nltk.tokenize import sent_tokenize, word_tokenize, wordpunct_tokenize
        from string import punctuation
        # use NLTK's default set of english stop words
        stops_list = nltk.corpus.stopwords.words('english')
        if normalize == 'lemma':
            # lemmatize with WordNet
            normalizer = WordNetLemmatizer()
        elif normalize == 'stem':
            # stem with Porter
            normalizer = PorterStemmer()
        # tokenize the document into sentences with NLTK default
        sents = sent_tokenize(doc)
        # tokenize each sentence into words with NLTK default
        tokenized_sents = [wordpunct_tokenize(sent) for sent in sents]
        # filter out "bad" words, normalize good ones
        normalized_sents = []
        for tokenized_sent in tokenized_sents:
            good_words = [word for word in tokenized_sent
                          # filter out too-long words
                          if len(word) < 25
                          # filter out bare punctuation
                          if word not in list(punctuation)]
            if filter_stopwords is True:
                good_words = [word for word in good_words
                              # filter out stop words
                              if word not in stops_list]
            if normalize == 'lemma':
                normalized_sents.append([normalizer.lemmatize(word) for word in good_words])
            elif normalize == 'stem':
                normalized_sents.append([normalizer.stem(word) for word in good_words])
            else:
                normalized_sents.append([word for word in good_words])
        return normalized_sents

Running our sample article through the grinder gives us this:

    [‘yes’, ‘true’, ‘crisis’, ‘terrible’, ‘thing’, ‘waste’], [‘timeout’, ‘also’, ‘terrible’, ‘thing’, ‘waste’, ‘look’, ‘world’, ‘today’, ‘wonder’, ‘thats’, ‘exactly’, ‘weve’, ‘done’], [‘weve’, ‘wasted’, ‘five-year’, ‘timeout’, ‘geopolitics’, ‘dont’, ‘wake’, ‘get’, ‘act’, ‘together’, ‘country’, ‘chinese’, ‘russian’, ‘european’, ‘dont’, ‘really’, ‘going’, ‘regret’], [‘think’, ‘relative’, ‘luxury’, ‘weve’, ‘enjoyed’, ‘since’, ‘great’, ‘recession’, ‘hit’], …

Slowly but surely,
Friedman’s insipid words are taking on a standardized, machine-friendly format. The next key step in a typical NLP pipeline is part-of-speech (POS) tagging: classifying words into their context-appropriate part-of-speech and labeling them as such. Again, this seems like something that ought to be straightforward (kids are taught how to do this at a fairly young age, right?), but in practice it’s not so simple. In general, the incredible ambiguity of natural language has a way of confounding NLP algorithms — and occasionally humans, too. For instance, think about all the ways “well” can be used in a sentence: noun, verb, adverb, adjective, and interjection (any others?). Plus, there’s no “official” POS tagset for English, although the conventional sets, e.g. Penn Treebank, have upwards of 50 distinct parts of speech. The simplest POS tagger out there assigns a default tag to each word; in English, singular nouns (“NN”) are probably your best bet, although you’ll only be right about 15% of the time! Other simple taggers determine POS from spelling: words ending in “-ment” tend to be nouns, “-ly” adverbs, “-ing” gerunds, and so on. Smarter taggers use the context of surrounding words to assign POS tags to each word. Basically, you calculate the frequency that a tag has occurred in each context based on pre-tagged training data, then for a new word, assign the tag with the highest frequency for the given context. The models can get rather elaborate (more on this in my next post), but this is the gist. NLTK comes pre-loaded with a pretty decent POS tagger trained using a Maximum Entropy classifier on the Penn Treebank corpus (I think).
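The spelling-based heuristics mentioned above are easy to sketch; this toy tagger (my illustration, not NLTK's implementation) guesses a tag from the suffix alone and falls back to the "NN" default:

```python
def suffix_tag(word):
    """Toy POS guesser from spelling alone; real taggers also use context."""
    if word.endswith("ly"):
        return "RB"    # adverb, e.g. "quickly"
    if word.endswith("ing"):
        return "VBG"   # gerund, e.g. "running"
    if word.endswith("ment"):
        return "NN"    # noun, e.g. "government"
    return "NN"        # default: singular noun, the single most common tag

print([suffix_tag(w) for w in ["quickly", "running", "government", "cache"]])
# ['RB', 'VBG', 'NN', 'NN']
```

Heuristics like these are exactly what a tagger backs off to when it has never seen a word before.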
See here:

    def pos_tag_sents(tokenized_sents):
        from nltk.tag import pos_tag
        tagged_sents = [pos_tag(sent) for sent in tokenized_sents]
        return tagged_sents

Each tokenized word is now paired with its assigned part of speech in the form of (word, tag) tuples:

    [[(‘yes’, ‘NNS’), (‘its’, ‘PRP$’), (‘true’, ‘JJ’), (‘a’, ‘DT’), (‘crisis’, ‘NN’), (‘is’, ‘VBZ’), (‘a’, ‘DT’), (‘terrible’, ‘JJ’), (‘thing’, ‘NN’), (‘to’, ‘TO’), (‘waste’, ‘VB’)], [(‘but’, ‘CC’), (‘a’, ‘DT’), (‘timeout’, ‘NN’), (‘is’, ‘VBZ’), (‘also’, ‘RB’), (‘a’, ‘DT’), (‘terrible’, ‘JJ’), (‘thing’, ‘NN’), (‘to’, ‘TO’), (‘waste’, ‘VB’), (‘and’, ‘CC’), (‘as’, ‘IN’), (‘i’, ‘PRP’), (‘look’, ‘VBP’), (‘at’, ‘IN’), (‘the’, ‘DT’), (‘world’, ‘NN’), (‘today’, ‘NN’), (‘i’, ‘PRP’), (‘wonder’, ‘VBP’), (‘if’, ‘IN’), (‘thats’, ‘NNS’), (‘exactly’, ‘RB’), (‘what’, ‘WP’), (‘weve’, ‘VBP’), (‘just’, ‘RB’), (‘done’, ‘VBN’)], …

Great! The first word is incorrect: “yes” is not a plural noun (“NNS”). But after that, once you exclude weirdness arising from how I dealt with punctuation (by stripping it out, turning “it’s” into “its,” which was consequently tagged as a possessive pronoun), the tagger did pretty well. Note that I pulled back a bit from our previous text normalization by adding stop words back in and not lemmatizing: as I said, that’s not appropriate for every task. One final, fundamental task in NLP is chunking: the process of extracting standalone phrases, or “chunks,” from a POS-tagged sentence without fully parsing the sentence (on a related note, chunking is also known as partial or shallow parsing). Chunking, for instance, can be used to identify the noun phrases present in a sentence, while full parsing could say which is the subject of the sentence and which the object. So why stop at chunking? Well, full parsing is computationally expensive and not very robust; in contrast, chunking is both fast and reliable, as well as sufficient for many practical uses in information extraction, relation recognition, and so on.
A simple chunker can use patterns in part-of-speech tags to determine the types and extents of chunks. For example, a noun phrase (NP) in English often consists of a determiner, followed by an adjective, followed by a noun: the/DT fierce/JJ queen/NN. A more thorough definition might include a possessive pronoun, any number of adjectives, and more than one (singular/plural, proper) noun: his/PRP$ adorable/JJ fluffy/JJ kitties/NNS. I’ve implemented one such regular expression-based chunker in NLTK, which looks for noun, prepositional, and verb phrases, as well as full clauses:

    def chunk_tagged_sents(tagged_sents):
        from nltk.chunk import regexp
        # define a chunk "grammar", i.e. chunking rules
        grammar = r"""
            NP: {<DT|PP\$>?<JJ>*<NN.*>+}  # noun phrase
            PP: {<IN><NP>}                # prepositional phrase
            VP: {<MD>?<VB.*><NP|PP>}      # verb phrase
            CLAUSE: {<NP><VP>}            # full clause
            """
        chunker = regexp.RegexpParser(grammar, loop=2)
        chunked_sents = [chunker.parse(tagged_sent) for tagged_sent in tagged_sents]
        return chunked_sents

    def get_chunks(chunked_sents, chunk_type='NP'):
        all_chunks = []
        # chunked sentences are in the form of nested trees
        for tree in chunked_sents:
            chunks = []
            # iterate through subtrees / leaves to get individual chunks
            raw_chunks = [subtree.leaves() for subtree in tree.subtrees()
                          if subtree.node == chunk_type]
            for raw_chunk in raw_chunks:
                chunk = []
                for word_tag in raw_chunk:
                    # drop POS tags, keep words
                    chunk.append(word_tag[0])
                chunks.append(' '.join(chunk))
            all_chunks.append(chunks)
        return all_chunks

I also included a function that iterates through the resulting parse trees and grabs only chunks of a certain type, e.g. noun phrases. Here’s how Friedman fares:

    [[‘yes’, ‘a crisis’, ‘a terrible thing’], [‘a timeout’, ‘a terrible thing’, ‘the world today’, ‘thats’], [‘weve’, ‘a five-year timeout’, ‘geopolitics’, ‘act’, ‘a country’, ‘the chinese russians’, ‘europeans’], …

Well, could be worse for a basic run-through!
We’ve grabbed a handful of simple NPs, and since this is Thomas Friedman’s writing, I suppose that’s all one can reasonably hope for. (There’s probably a “garbage in, garbage out” joke to be made here.) You can see that removing punctuation has persisted in causing trouble — “weve” is not a noun phrase — which underscores how important text cleaning is and how decisions earlier in the pipeline affect results further along. In my next NLP post, I’ll discuss how to improve this basic pipeline and thereby improve subsequent, higher-level results. For more information, check out Natural Language Processing with Python (free here), a great introduction to NLP and NLTK. Another practical resource is streamhacker.com and the associated book, Python Text Processing with NLTK 2.0 Cookbook. If you want NLP without NLTK, Stanford’s CoreNLP software is a standalone Java implementation of the basic NLP pipeline that requires minimal code on the user’s part (note: I tried it and was not particularly impressed). Or you could just wait for my next post. :)
http://bdewilde.github.io/blog/blogger/2013/04/16/intro-to-natural-language-processing-2/
Camera Problem on Symbian^3

Hi all, I'd like to know if some of you know how I can use the camera in a Qt app; if possible, I'd like to have the standard Nokia camera to capture a photo. I cannot use beta APIs because I must put my app on Ovi. Thanks.

Use Symbian C++ in Qt.

Hi, see the docs for the camera-related classes in the Multimedia module of Qt Mobility 1.1.0. The API and implementation have been finalised I believe (it's not beta), but I don't know whether the smart installer supports it yet (which would impact availability on Ovi Store, I think). Cheers, Chris.

QtMobility 1.1.0 is not accepted into the Ovi Store, and probably won't be until next year when Qt 4.7.2 is allowed. So you'll have to take the advice of Alexander here and make do with Symbian C++. However, if you don't mind waiting until next year, the Camera API will be much easier and quicker (although that sort of defeats the point when you need to wait ;) ). By the way, I think they may be replacing SmartInstaller or at least changing the process, because I got a notification through Ovi Publisher that you will no longer require the statement that says the application may require up to 13MB of additional downloads (Qt Installer).

Don't know if this solution is legal (GPL, LGPL), but it should work. Take the files qcamera.cpp and qcamera.h from the qtmobility 1.1 for Symbian source and include them in your project.

[quote author="qwertyuiopearendil" date="1291366923"]Refer "this": link for achieving the above in Symbian.[/quote] But the link that you've posted talks about S60; can I use it even if I'm developing for Symbian^3?

There shouldn't be a difference. Try it out on a S^3 device.

[quote author="qwertyuiopearendil" date="1291394527"]But the link that you've posted talks about S60; can I use it even if I'm developing for Symbian^3?[/quote] As mentioned above, it should most probably work with Symbian^3 if the plugin is available.

Yes, and better to develop the complete app in Symbian than Qt, as you need to put it on Ovi.
You will have to develop the engine in native Symbian, and if you develop the UI part also in Symbian instead of Qt, your app will probably support a wider range of devices.

[quote author="tamhanna" date="1291440958"]The Qt has the most benefits if you later want to port to Maemo or desktop at some point in the future...[/quote] But then again, Qt Mobility is mainly for mobile devices and these APIs won't be supported on desktop, right? And with the N900 being the only Maemo device with Qt updates, the target audience is very small. Maybe once MeeGo comes into the scenario, that might change.

[quote author="QtK" date="1291441234"]But then again, Qt Mobility is mainly for mobile devices and these APIs won't be supported on desktop, right?[/quote] Read the first post from here: It says "but also provide useful application functionality across desktop platforms." It's quite useful and a time saver.

I don't see the problem. SmartInstaller will install the latest on the user's system. It wouldn't update your app if you didn't use SI either. What's the point? If you mean your app will be built on an older Qt, that's fine. Qt will be backwards compatible with your app anyway. You can update it in your own time.

Well, Qt Mobility obviously has a very useful API set and is very quick to code with. Better to use the wheel than reinvent it. And it does link in well with Qt. SmartInstaller means that you don't need to worry about whether the user has it on their system or not. So, it's win/win. What would you use instead? The only problem (with regards to the topic) is that the QtMobility version the user wants isn't allowed to be deployed on Ovi yet.

No, we've been over this. Write once, it will always work. You don't need to update your code because new Qt versions come out.

[quote author="xsacha" date="1291446040"]But these days decisions made by Nokia confuse developers a lot. You cannot say if they will stick to one plan, especially the way the roadmap is being changed.[/quote]
I will still use it, especially for the camera, regardless. I prefer coding a little a lot rather than a lot a little.

You don't get Qt apps signed; Nokia signs them for you. It's completely free (the exception being when it requires too much capability). I submit an unsigned app on Ovi Publish, mark it as 'unsigned', and Nokia does the rest. Hopefully this is what you meant.

[quote author="tamhanna" date="1291449010"]This is a question of taste.[/quote] Only if they break compatibility. If they didn't, it would be a no-brainer. Code a little, little.

Well, both of you are right. Just to add: the app which Nokia signs cannot be distributed via any other channel. It can be distributed only via Ovi. That's the catch for free Ovi signing.

I didn't say you can't sign it yourself. I'm saying that's the way you're meant to submit apps on Ovi, and that's the way I do it. It fits in with QtCreator, it's the easiest, and of course free. If your app needs more than basic capabilities, then you'll need to pay to sign, of course. Nokia gives you free UID3s, by the way, in allotments of five unless you require more. My route is 0 euros a pop.

How about just using that UID3 in the first place? I don't understand. You have to stick with an old UID3, so you have to pay to sign it? This is for Qt apps. This is the first time Qt apps have been allowed on the Ovi Store. So no apps from 2008, no old UIDs. I fail to see your point.

Not 2008 :P The time between SmartInstaller's existence and free signing is a very short window. Unlucky if you missed out, but it's a one-off break from UID, and it affects only a handful of apps. It also saves 20 euros (per pop, as you say) for each future update. Anyway, you can create a topic about it if you want; this is going off-topic.

Well, I think you're in the minority, so I stand by my free signing of Qt apps with Nokia UIDs. Going back on-topic: so, for you, updating your projects for each QtMobility update may prove expensive, but it doesn't really affect the rest of us.
Especially considering these are new apps we are talking about. So I'll happily go with QtMobility to use such APIs as Camera, and update when it is required.

- kkrzewniak
Please correct me if I'm wrong, but all the APIs in QtMobility 1.1 are stated to be final (API Maturity Level: FINAL). So any app using QtMobility 1.1 should run with future Mobility packages, as the API is final. "Mobility Doc":

Well, that is true, kkrzewniak. But there's a lot of important functionality in QtMobility 1.1 that resides in the 'labs', and this of course is not final and will change soon. Examples:
- Gestures (pinch-zoom, swipe and so on) Plugin
- Folder List Plugin
- Particles Plugin
Fortunately for the OP, Camera shouldn't change much. The APIs defined as 'FINAL' may still be extended (but remain backwards compatible).

- kkrzewniak
Yes, and that is my point exactly! Stop whining that QtMobility is so bad; just stick to the parts you know to be final. I don't know why tam hates QtMobility so much. My only guess is he has created his own custom libraries to deal with the camera and so on, and hates that it has all been replaced by 3 lines of code :P.

- tobias.hunger Moderators
tamhanna: Have fun limiting your apps to the Symbian ecosystem then. Seriously: Qt is meant to make it easy to cover all (Nokia) phones, not just Symbian. You are giving up on MeeGo before it is even out! And there even are community ports of Qt to Android and iPhone... who knows, maybe those will become official someday...

- tobias.hunger Moderators
tamhanna: Most Linux distributions are moving away from HAL... I do not know whether it will be in MeeGo or not :-)

I don't think his 'hal' header is actually related to HAL; that's just how you handle these libraries, right tam? I knew you spent time on custom libraries! :P It's OK to move on to QtMobility, you know, even if it seems you've wasted time making your hal.h. I don't think MeeGo will use HAL though. Will it even use X11? I think Nokia wanted Wayland?
OK, I'm not sure why you call it HAL :P I always call that stuff 'defines.h' or 'constants.h' or similar. This doesn't show your solution for QtMobility. What do you use? Do you have anything that could help the OP, for example, with his Camera API? Or do you just write this in Symbian C++?

Hi all, to be able to use the camera in my app I've decided to use the CCamera class and MCameraObserver2; below you'll find all my class code:

@
//MainWindow.h
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>
#include <ecam.h>

namespace Ui {
    class MainWindow;
}

class MainWindow : public QMainWindow, public MCameraObserver2
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = 0);

    //Derived from MCameraObserver2
public:
    void HandleEvent(const TECAMEvent &);
    void ImageBufferReady(MCameraBuffer &, TInt);
    void VideoBufferReady(MCameraBuffer &, TInt);
    void ViewFinderReady(MCameraBuffer &, TInt);

    ~MainWindow();

private:
    Ui::MainWindow *ui;
    bool iCameraOn;

private slots:
    void on_pushButton_clicked();

public:
    CCamera *iCamera;
    TCameraInfo iCameraInfo;
};

#endif // MAINWINDOW_H
@

@
//MainWindow.cpp
#include <ecam.h>
//----------------
//To remove
#include <QDebug>
//----------------
#include "MainWindow.h"
#include "ui_MainWindow.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    iCamera = 0;
    QWidget::setWindowFlags(windowFlags() | Qt::WindowSoftkeysVisibleHint);
    setWindowFlags(Qt::WindowSoftkeysVisibleHint); //To be able to see down soft keys
    iCameraOn = false;
}

MainWindow::~MainWindow()
{
    qDebug() << "Delete MainWindow";
    if (iCamera != 0) {
        iCamera->Release();
        delete iCamera;
        iCamera = 0;
    }
    delete ui;
}

void MainWindow::on_pushButton_clicked()
{
    //Activate Camera
    // iCamera=CCamera::New2L(*this,CCamera::CamerasAvailable(),(TInt) 50);
    TInt a_TInt_NoOfCameraAvailable = CCamera::CamerasAvailable();
    int a_int_NoOfCameraAvailable = (int) a_TInt_NoOfCameraAvailable;
    qDebug() << "CamerasAvailable return value: " << a_int_NoOfCameraAvailable;
    if (a_int_NoOfCameraAvailable > 0) {
        if (iCamera == NULL) {
            qDebug() << "00 - Building iCamera and reserve it";
            iCamera = CCamera::New2L(*this, 0, 0);
            iCamera->Reserve();
        }
    }
}

void MainWindow::HandleEvent(const TECAMEvent &aEvent)
{
    QString aString = "I'm in HandleEvent";
    qDebug() << aString;
    switch (aEvent.iEventType.iUid) {
    case 270499131: {
        //Reserve Complete
        qDebug() << "01 - Reserve Complete start power on";
        if (iCameraOn == false) {
            qDebug() << "MainWindow::HandleEvent iCamera->PowerOn();";
            iCamera->PowerOn();
            iCameraOn = true;
        }
        break;
    }
    case 270499132: {
        //PowerOn Complete
        qDebug() << "02 - Power On complete";
        iCamera->CameraInfo(iCameraInfo);
        iCamera->PrepareImageCaptureL(CCamera::EFormatJpeg, 0, TRect(0, 0, 360, 640));
        break;
    }
    default: {
        qDebug() << "aEvent= " << (int) aEvent.iEventType.iUid;
        break;
    }
    }
}

void MainWindow::ImageBufferReady(MCameraBuffer &aCameraBuffer, TInt)
{
    QString aString = "I'm in ImageBufferReady";
    qDebug() << aString;
}

void MainWindow::VideoBufferReady(MCameraBuffer &aCameraBuffer, TInt)
{
    QString aString = "I'm in VideoBufferReady";
    qDebug() << aString;
}

void MainWindow::ViewFinderReady(MCameraBuffer &aCameraBuffer, TInt)
{
    QString aString = "I'm in ViewFinderReady";
    qDebug() << aString;
}
@

But when I run the program, the console says: 'CActiveScheduler::RunIfReady() returned error: -5'. Could you please let me know where I'm going wrong? Thanks.
https://forum.qt.io/topic/2061/camera-problem-on-symbain-3
Python 2 to the power of 3. A lightweight porting helper library.

Project description

Eight is a Python module that provides a minimalist compatibility layer between Python 3 and 2. Eight lets you write code for Python 3.3+ while providing limited compatibility with Python 2.7 with no code changes. Eight is inspired by six, nine, and python-future, but provides better internationalization (i18n) support and environment variable I/O.

Decoding command-line arguments

Eight provides a utility function to decode the contents of sys.argv on Python 2 (as Python 3 does). It uses sys.stdin.encoding as the encoding to do so:

import eight
eight.decode_command_line_args()

The call to decode_command_line_args() replaces sys.argv with its decoded contents and returns the new contents. On Python 3, the call is a no-op (it returns sys.argv and leaves it intact).

Wrapping environment variable getters and setters

Eight provides utility wrappers to help bring Python 2 environment variable access and assignment in line with Python 3: it encodes the input to os.putenv (which is used for statements like os.environ[x] = y) and decodes the output of os.getenv (used for x = os.environ[y]). Use wrap_os_environ_io() to monkey-patch these wrappers into the os module:

import eight
eight.wrap_os_environ_io()

On Python 3, the call is a no-op.

The following modules, renamed in Python 3, should be imported by name using from eight import <name> when needed:

- queue (old name: Queue)
- builtins (old name: __builtin__)
- copyreg (old name: copy_reg)
- configparser (old name: ConfigParser)
- reprlib (old name: repr)
- winreg (old name: _winreg)
- _thread (old name: thread)
- _dummy_thread (old name: dummy_thread)

The following modules have attributes which resided elsewhere in Python 2: TODO

Acknowledgments

Python-future for doing a bunch of heavy lifting on backports of Python 3 features.

Links

License

Licensed under the terms of the Apache License, Version 2.0.
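The decoding behavior described above can be illustrated with a small standalone sketch. This is not eight's actual implementation, and `decode_args` is a made-up name; it just mimics the idea of decoding byte-string argv entries with the stdin encoding, leaving already-decoded entries alone:

```python
import sys

def decode_args(argv, encoding=None):
    """Decode byte-string argv entries to text, as Python 3 does natively.

    Entries that are already str pass through unchanged, mirroring the
    no-op behavior on Python 3 described above.
    """
    encoding = encoding or sys.stdin.encoding or "utf-8"
    return [a.decode(encoding) if isinstance(a, bytes) else a for a in argv]

# Byte arguments (as Python 2 would pass them) get decoded:
print(decode_args([b"prog", b"caf\xc3\xa9"], encoding="utf-8"))
# Already-decoded arguments are left intact:
print(decode_args(["prog", "café"]))
```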
https://pypi.org/project/eight/
Slowly but surely, Java is getting better with every new release, and some parts of the Guava project are becoming less relevant (e.g. its common.io package, which has been filling the gaps in pre-JDK 7's file APIs, or some functional programming constructs that will be retired after Java 8 introduces the Java community to the joys of closures). Still, other parts of the Guava library are sure to stay around and be further developed. One such vibrant part of Guava is the Event Bus system, housed in the com.google.common.eventbus package. Guava's Event Bus is a message dispatching system designed to allow a publish-subscribe style of communication between components; it is no-nonsense, lightweight and very practical. Note that there is no JMS or Message Oriented Middleware infrastructure involved, and everything happens within the run-time boundaries of the same Java application.

Building an Event Bus

It takes three components to build an application's Event Bus:

- The Event (Message) Bus itself – represented by the EventBus class, which provides methods for subscribers (event listeners) to register and unregister themselves with the Bus, as well as a method for dispatching events (messages) to the target subscribers
- The event (message), which can be any Java object: a Date, a String, your POJO, anything that can be routed by the Bus to your subscriber
- The event subscriber (listener) – an arbitrary-complexity Java class that must have a specially annotated method for handling events (messages); this method is a call-back function that must return void and take one parameter of the same type as the type of the corresponding event (a Date, a String, your POJO, etc.)
The Event Bus comes in two flavors:

- Synchronous (backed up by the EventBus class), and
- Asynchronous (backed up by the AsyncEventBus class, which extends EventBus)

Both classes are housed in the com.google.common.eventbus package and expose the following API operations:

void register(Object) – registers and caches subscribers for subsequent event handling
void unregister(Object) – undoes the register action
void post(Object) – posts an event (message) to all registered subscribers

Notice that the Event Bus API is totally permissive about the type of the subscribers and events. So, how do you set up the stage and go about event dispatching?

Event Dispatching on the Bus

Here is the sequence of simple steps to follow to make your Bus busy with event dispatching:

- Create an instance of your event (message); as we said earlier, it can be any Java object
- Create a subscriber to the event (message). In the subscriber class, designate and annotate one method with the com.google.common.eventbus.Subscribe annotation (which simply marks a method as an event handler). The method's single input parameter should match the type of the event this subscriber wants to receive from the Bus. You can use any allowed identifier for the method's name:

class MyStringEventSubscriber {
    . . .
    @Subscribe
    public void onEvent(String e) {
        // Handle the string passed on by the Event Bus
    }
    . . .
}
- Instantiate the Event Bus. The synchronous EventBus type gets instantiated as follows:

EventBus eBus = new EventBus();

The most common way to instantiate the asynchronous AsyncEventBus type is as follows:

EventBus eBus = new AsyncEventBus(java.util.concurrent.Executors.newCachedThreadPool());

- Register your subscriber(s) with the Event Bus:

eBus.register(new MyStringEventSubscriber());

Note: At this point, the EventBus object will parse your subscriber's class definition for the presence of the @Subscribe-annotated method and register it as the call-back handler for the specific type of event it can handle.

- Create an instance of your event (message), set its payload as needed and post (send, fire) the event through the Bus:

MyEvent me = new MyEvent("Some payload");
eBus.post(me);

or

Date now = new Date();
eBus.post(now);

For handling situations when no matching event handler is found (e.g. because the target subscriber was unregistered, or an unsupported event was posted), the Event Bus offers an elegant and effective solution – you simply create a "Dead Event" (or "Catch-All-That-Fell-Thru-Cracks") subscriber, which acts as sort of a dead-letter queue in conventional message queuing systems, intercepting messages that failed to be delivered to any known subscriber (destination). This subscriber must take as a parameter the predefined com.google.common.eventbus.DeadEvent event, which acts as a wrapper for the actual undelivered event.

public class MyDeadEventsSubscriber {
    @Subscribe
    public void handleDeadEvent(DeadEvent deadEvent) {
        // You get to the actual undelivered event as follows:
        Object undelivered = deadEvent.getEvent();
    }
}

You instantiate and register the dead event handler with the Event Bus in the usual way: eBus.register(new MyDeadEventsSubscriber()).
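The register/post flow described above is ordinary type-routed publish-subscribe. As a language-neutral illustration (a toy bus, not Guava's implementation — note that Guava derives the event type from the @Subscribe method's parameter, while this toy passes it explicitly), here is the same idea, dead-event fallback included:

```python
from collections import defaultdict

class ToyEventBus:
    """Minimal type-routed publish-subscribe bus (illustration only)."""

    def __init__(self):
        self._handlers = defaultdict(list)  # event type -> list of handlers

    def register(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def post(self, event):
        handlers = self._handlers.get(type(event), [])
        if not handlers:
            # No subscriber matched: deliver to the catch-all, wrapped,
            # roughly like Guava wraps undeliverable events in a DeadEvent.
            handlers = self._handlers.get(object, [])
            event = ("dead", event)
        for h in handlers:
            h(event)

received = []
bus = ToyEventBus()
bus.register(str, received.append)     # subscriber for String events
bus.register(object, received.append)  # "dead event" catch-all
bus.post("hello")                      # delivered to the str subscriber
bus.post(42)                           # no int subscriber -> catch-all
print(received)  # ['hello', ('dead', 42)]
```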
The synchronous event dispatching systems (supported by the EventBus class) are not that exciting – event processing is serialized, forcing the caller to wait until the target subscriber's handler method returns before dispatching the next enqueued event. In most cases, this is not what you want, due to the poor performance of this arrangement or other considerations. You will recall that asynchronous processing is backed up by the AsyncEventBus class, which we will use in our discussion below. With the AsyncEventBus class, the event handling method of your subscriber should additionally be decorated with the com.google.common.eventbus.AllowConcurrentEvents annotation in order to designate it as an asynchronous call-back event handler. So, your event handler method should have the following signature:

@Subscribe
@AllowConcurrentEvents
public void someMethod(YourEventBean e) {
    ......
}

Wrapping up the Event Bus API in a Singleton

One of the practical ways to expose the Event Bus functionality to clients is to create a singleton around an instance of the AsyncEventBus class.
The methods of AsyncEventBus use internal synchronization, so our singleton is thread-safe:

import java.util.concurrent.Executors;
import com.google.common.eventbus.AsyncEventBus;
import com.google.common.eventbus.EventBus;

public class EventBusService {
    private static EventBusService instance = new EventBusService();

    public static EventBusService $() {
        return instance;
    }

    private EventBus eventBus = null;

    private EventBusService() {
        eventBus = new AsyncEventBus(Executors.newCachedThreadPool());
    }

    public void registerSubscriber(Object subscriber) {
        eventBus.register(subscriber);
    }

    public void unRegisterSubscriber(Object subscriber) {
        eventBus.unregister(subscriber);
    }

    public void postEvent(Object e) {
        eventBus.post(e);
    }
}

And our singleton is used by clients as follows:

EventBusService.$().registerSubscriber(new MyEventBusSubscriber());
MyMessage msg = new MyMessage();
msg.setPayLoad(....);
EventBusService.$().postEvent(msg);

Making the Event Bus Bi-Directional

You may have already noticed that the Event Bus only works one way: it sends events from the caller down to the subscribers, and there is no directly supported mechanism for the system to notify the caller about the outcome of the event handling operation. Wouldn't it be great if you could fix this little but annoying shortcoming and make it bi-directional? And that's what we are going to do right now.
Here are the technical requirements of what we are going to do:

- Re-use the event bean (message) for passing back the subscriber's response (the result of handling the event)
- Use the standard Java concurrency API to suspend the caller thread until either of the following occurs:
  - the caller thread times out after waiting for a specified period of time
  - the response, as a payload to the original event, is set and is ready to be fetched by the caller
- Encapsulate the processing logic in a helper class that can be leveraged in other event types through extension
- Provide design-time type safety for the return type

Here is the prototype of the solution.

The Helper Class

The design of this class addresses all four of the above requirements. The 4th requirement is ensured via the <T> generic type parameter.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

/**
 * {@literal <<PROTOTYPE>>}
 *
 * @author Mikhail Vladimirov
 */
public class AsyncCallDataBean<T> {
    private T response = null;
    private CountDownLatch latch = new CountDownLatch(1);

    public T getReponse(long timeoutMS) {
        try {
            latch.await(timeoutMS, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return response;
    }

    public void setResponse(T response) {
        this.response = response;
        latch.countDown();
    }
}

The goal of the latch field is two-fold:

- Suspend the execution of the current thread when the getReponse() method is called (this happens on the caller's thread) for the duration passed in as a method parameter
- Wake up the current (caller's) thread when the response is set; the latch.countDown() call brings the value of the latch down from 1 to 0, which effectively unblocks the thread sitting in the wait state, letting the caller get the response via the getReponse() method

So, what you have to do is simply extend your event bean from the AsyncCallDataBean type.
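The latch mechanics just described translate readily to other languages. As a rough sketch (made-up names, not the article's Java; Python's threading.Event stands in for CountDownLatch(1)):

```python
import threading

class AsyncCallData:
    """Request/response carrier: the caller blocks until a responder sets a value."""

    def __init__(self):
        self._response = None
        self._ready = threading.Event()  # plays the role of CountDownLatch(1)

    def get_response(self, timeout_s):
        self._ready.wait(timeout_s)      # returns early on set(), else times out
        return self._response            # None if the wait timed out

    def set_response(self, response):
        self._response = response
        self._ready.set()                # wakes up the waiting caller

call = AsyncCallData()
# Simulate a subscriber answering from another thread shortly afterwards:
threading.Timer(0.05, call.set_response, args=["Some response"]).start()
print(call.get_response(timeout_s=3))    # unblocks as soon as the response is set
```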
/**
 * {@literal <<EVENT>>}
 */
public class RequestResponseDataBean extends AsyncCallDataBean<String> {
    private Object payLoad;

    public RequestResponseDataBean(Object payLoad) {
        this.payLoad = payLoad;
    }

    public Object getPayLoad() {
        return payLoad;
    }
}

The usage of the extension is as follows:

EventBusService.$().registerSubscriber(new MyEventBusSubscriber());
RequestResponseDataBean rrb = new RequestResponseDataBean("Some event payload");
EventBusService.$().postEvent(rrb);
long timeOut = 3000; // wait for 3 seconds before timing out
String result = rrb.getReponse(timeOut); // will unblock on time out or on a call to rrb.setResponse()
...
// Somewhere on another thread …

The handler method of MyEventBusSubscriber gets the RequestResponseDataBean event and sets its response value:

rrb.setResponse("Some response");

So, the rrb.getReponse(timeOut) call will unblock either on a time out (in which case the response will be null) or when the result is ready for pick-up, whichever occurs first. There are no conditions for a memory leak, as all references to the event bean will eventually be released and it will be GCed.

That's it. We created a nice micro-architecture for a bi-directional Event Bus. Happy Bus driving!

Many thanks for the article, very interesting! After trying out the Guava implementation and the request/response pattern in your article, I have discovered that there is a potential deadlock. As far as I have understood, when you post an event in the @Subscribe method, the event gets added to the queue. If we have this scenario: xxx events -> @Subscribe method -> send RequestResponseDataBean -> @Subscribe method (responds back using setResponse()), we could end up with a scenario where the RequestResponseDataBean ends up in the queue after the xxx events, and the xxx events are not being processed as they are waiting for the latch countdown. Not sure how this should best be handled.
Two event bus instances come to mind, but perhaps there is some other solution.

Nice 🙂

Nice article. I reinvented your request with latched response this morning.
https://www.webagesolutions.com/blog/building-a-bi-directional-event-bus-with-the-google-guava
From Jenkins to Jenkins X

This is a tale of dailymotion's journey from Jenkins to Jenkins X, the issues we had, and how we solved them.

Our context

At dailymotion, we strongly believe in devops best practices, and are heavily investing in Kubernetes. Part of our products are already deployed on Kubernetes, but not all of them. So, when the time came to migrate our ad-tech platform, we wanted to fully embrace "the Kubernetes way" — or cloud-native, to be buzzword-compliant! This meant redefining our whole CI/CD pipeline and moving away from static/permanent environments, in favour of dynamic on-demand environments. Our goal was to empower our developers, reduce our time to market and reduce our operation costs.

Our initial requirements for the new CI/CD platform were:

- avoid starting from scratch if possible: our developers are already used to using Jenkins and the declarative pipelines, and those are working just fine for our current needs
- target a public cloud infrastructure — Google Cloud Platform — and Kubernetes clusters
- be compatible with the gitops methodology — because we love version control, peer review, and automation

There are quite a few actors in the CI/CD ecosystem, but only one matched our needs: Jenkins X, based on Jenkins and Kubernetes, with native support for preview environments and gitops.

Jenkins on Kubernetes

The Jenkins X setup is fairly straightforward, and already well documented on their website. As we're already using Google Kubernetes Engine (GKE), the jx command-line tool created everything by itself, including the Kubernetes cluster. Cue the little wow effect here: obtaining a complete working system in a few minutes is quite impressive.

Jenkins X comes with lots of quickstarts and templates to add to the wow effect; however, at dailymotion we already have existing repositories with Jenkins pipelines that we'd like to re-use.
So, we've decided to do things "the hard way", and refactor our declarative pipelines to make them compatible with Jenkins X.

Actually, this part is not specific to Jenkins X, but to running Jenkins on Kubernetes, based on the Kubernetes plugin. If you are used to "classic" Jenkins, with static slaves running on bare metal or VMs, the main change here is that every build will be executed on its own short-lived custom pod. Each step of the pipeline can then specify on which container of the pod it should be executed. There are a few examples of pipelines in the plugin's source code.

Our "challenge" here was to define the granularity of our containers, and which tools they'd contain: enough containers so we can reuse their images between different pipelines, but not too many either, to keep maintenance under control — we don't want to spend our time rebuilding container images.

Previously, we used to run most of our pipeline steps in Docker containers, and when we needed a custom one, we built it on-the-fly in the pipeline, just before running it. It was slower, but easier to maintain, because everything was defined in the source code. Upgrading the version of the Go runtime could be done in a single pull-request, for example. So, having to pre-build our container images sounded like adding more complexity to our existing setup. It also has a few advantages: less duplication between repositories, faster builds, and no more build errors because some third-party hosting platform is down.

Building images on Kubernetes

Which brings us to an interesting topic these days: building container images in a Kubernetes cluster.

Jenkins X comes with a set of build packs that use "Docker in Docker" to build images from inside containers. But with the new container runtimes coming, and Kubernetes pushing its Container Runtime Interface (CRI), we wanted to explore other options. Kaniko was the most mature solution, and matched our needs/stack.
We were thrilled…

…until we hit 2 issues:

- The first one was a blocking issue for us: multi-stage builds didn't work. Thanks to Google we quickly found that we were not the only ones affected, and that there was no fix or workaround yet. However, Kaniko is developed in Go, and we are Go developers, so… why not have a look at the source code? It turns out that once we understood the root cause of the issue, the fix was really easy. The Kaniko maintainers were helpful and quick to merge the fix, so one day later a fixed Kaniko image was already available.
- The second one was that we couldn't build two different images using the same Kaniko container. This is because Jenkins isn't quite using Kaniko the way it is meant to be used — because we need to start the container first, and then run the build later. This time, we found a workaround on Google: declaring as many Kaniko containers as we need to build images, but we didn't like it. So back to the source code, and once again, once we understood the root cause, the fix was easy.

We tested a few solutions to build our custom "tools" images for the CI pipelines; in the end, we chose to use a single repository, with one image — Dockerfile — per branch. Because we are hosting our source code on Github, and using the Jenkins Github plugin to build our repositories, it can build all our branches and create new jobs for new branches on webhook events, which makes it easy to manage. Each branch has its own Jenkinsfile declarative pipeline, using Kaniko to build the image — and push it to our container registry. It's great for quickly adding a new image, or editing an existing one, knowing that Jenkins will take care of everything.

The importance of declaring the requested resources

One of the major issues we encountered with our previous Jenkins platform came from the static slaves/executors, and the sometimes-long build queues during peak hours.
Jenkins on Kubernetes makes it easy to solve this issue, mainly when running on a Kubernetes cluster that supports cluster autoscaling. The cluster will simply add or remove nodes based on the current load. But this is based on the requested resources, not on the observed used resources. It means that it's our job, as developers, to define the requested resources — in terms of CPU and memory — in our build pod templates. The Kubernetes scheduler will then use this information to find a matching node to run the pod — or it may decide to create a new one.

This is great, because we no longer have long build queues. But instead we need to be careful to define the right amount of resources, and to update them when we update our pipelines. As resources are defined at the container level, and not the pod level, it makes things a little more complex to handle. But we don't care about limits, only requests. And a pod's requests are just the sum of the requests of all its containers. So, we just write the resource requests for the whole pod on the first container — or on the jnlp one — which is the default. Here is an example of one of our Jenkinsfile, and how we can declare the requested resources:

Preview environments on Jenkins X

Now that we have all our tools, and we're able to build an image for our application, we're ready for the next step: deploying it to a "preview environment"!

Jenkins X makes it easy to deploy preview environments by reusing existing tools — mainly Helm — as long as you follow a few conventions, for example the names of the values for the image tag. It's best to copy/paste from the Helm charts provided in the "packs". If you are not familiar with Helm, it's basically a package manager for Kubernetes applications. Each application is packaged as a "chart", which can then be deployed as a "release" by using the helm command-line tool.
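The Jenkinsfile example referred to in the resources section above was an embedded gist that did not survive extraction. As a purely hypothetical sketch (container names, images and values are invented, not dailymotion's actual file), a declarative pipeline that writes the whole pod's requests on the first (jnlp) container could look like this:

```groovy
// Hypothetical sketch: the pod's total requests are declared once, on the
// jnlp container, since the scheduler sums requests across all containers.
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    resources:
      requests:
        cpu: "1"      # requests for the WHOLE pod,
        memory: 2Gi   # written on the first container only
  - name: go
    image: golang:1.11
    command: ['cat']
    tty: true
"""
        }
    }
    stages {
        stage('build') {
            steps {
                container('go') {
                    sh 'go build ./...'
                }
            }
        }
    }
}
```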
The preview environment is deployed by using the jx command-line tool, which takes care of deploying the Helm chart, and commenting on the Github pull-request with the URL of the exposed service.

This is all very nice, and worked well for our first POC using plain http. But it's 2018, nobody does http anymore. Let's encrypt! Thanks to cert-manager, we can automatically get an SSL certificate for our new domain name when creating the ingress resource in Kubernetes. We tried to enable the tls-acme flag in our setup — to do the binding with cert-manager — but it didn't work. This gave us the opportunity to have a look at the source code of Jenkins X — which is developed in Go too. A little fix later we were all good, and we can now enjoy secured preview environments with automatic certificates provided by Let's Encrypt.

The other issue we had with the preview environments is related to their cleanup. A preview environment is created for each opened pull-request, and so should be deleted when the pull-request is merged or closed. This is handled by a Kubernetes Job set up by Jenkins X, which deletes the namespace used by the preview environment. The issue is that this job doesn't delete the Helm release — so if you run helm list, for example, you will still see a big list of old preview environments.

For this one, we decided to change the way we use Helm to deploy a preview environment. The Jenkins X team already wrote about these issues with Helm and Tiller — the server-side component of Helm — and so we decided to use the helmTemplate feature flag to use Helm as a template-rendering engine only, and process the resulting resources using kubectl. That way, we don't "pollute" our list of Helm releases with temporary preview environments.

Gitops applied to Jenkins X

At some point in our initial POC, we were happy enough with our setup and pipelines, and wanted to transform our POC platform into a production-ready platform.
The first step was to install the SAML plugin to set up our Okta integration — to allow our internal users to log in. It worked well, and then a few days later, I noticed that our Okta integration was not there anymore. I was busy doing something else, so I just asked my colleague if he'd made some changes, and moved on to something else. But when it happened a second time a few days later, I started investigating.

The first thing I noticed was that the Jenkins pod had recently restarted. But we have persistent storage in place, and our jobs were still there, so it was time to take a closer look! It turns out that the Helm chart used to install Jenkins has a startup script that resets the Jenkins configuration from a Kubernetes configmap. Of course, we can't manage a Jenkins running in Kubernetes the same way we manage a Jenkins running on a VM!

So instead of manually editing the configmap, we took a step back and looked at the big picture. This configmap is itself managed by the jenkins-x-platform, so upgrading the platform would reset our custom changes. We needed to store our "customization" somewhere safe and track our changes.

We could go the Jenkins X way, and use an umbrella chart to install/configure everything, but this method has a few drawbacks: it doesn't support "secrets" — and we'll have some sensitive values to store in our git repository — and it "hides" all the sub-charts. So, if we list all our installed Helm releases, we'll only see one. But there are other tools based on Helm which are more gitops-friendly. Helmfile is one of them, and it has native support for secrets, through the helm-secrets plugin and sops. I won't go into the details of our setup right now, but don't worry, it will be the topic of my next blog post!

The migration

Another interesting part of our story is the actual migration from Jenkins to Jenkins X — and how we handled repositories with two build systems.
At first, we set up our new Jenkins to build only the "jenkinsx" branches, and we updated the configuration of our old Jenkins to build everything except the "jenkinsx" branch. We planned to prepare our new pipelines in the "jenkinsx" branch, and merge it to make the move. For our initial POC it worked nicely, but when we started playing with preview environments, we had to create new PRs, and those PRs were not built on the new Jenkins, because of the branch restriction. So instead, we chose to build everything on both Jenkins instances, but to use the Jenkinsfile filename for the old Jenkins, and the Jenkinsxfile filename for the new Jenkins. After the migration, we'll update this configuration and rename the files, but it's worth it, because it enables us to have a smooth transition between both systems, and each project can migrate on its own, without affecting the others.

Our destination

So, is Jenkins X ready for everybody? Let's be honest: I don't think so. Not all features and supported platforms — git hosting platforms or Kubernetes hosting platforms — are stable enough. But if you're ready to invest enough time to dig in, and select the stable features and platforms that work for your use-cases, you'll be able to improve your pipelines with everything required to do CI/CD and more. This will improve your time to market, reduce your costs, and — if you're serious about testing too — let you be confident about the quality of your software.

At the beginning, we said that this was the tale of our journey from Jenkins to Jenkins X. But our journey isn't over; we are still traveling. Partly because our target is still moving: Jenkins X is still in heavy development, and it is itself on its own journey towards serverless, using the Knative Build road for the moment. Its destination is Cloud Native Jenkins. It's not ready yet, but you can already have a preview of what it will look like.

Our journey also continues because we don't want it to finish.
Our current destination is not meant to be our final destination, but just a step in our continuous evolution. And this is the reason why we like Jenkins X: because it follows the same pattern. So, what are you waiting for? Embark on your own journey!
https://medium.com/dailymotion/from-jenkins-to-jenkins-x-604b6cde0ce3?source=collection_home---4------7---------------------
-----------------------------------------------------------------------------
-- |
-- Module      :  ForSyDe.Shallow.CTLib
-- Copyright   :  (c) SAM Group, KTH/ICT/ECS 2007-2008
-- License     :  BSD-style (see the file LICENSE)
--
-- Maintainer  :  forsyde-dev@ict.kth.se
-- Stability   :  experimental
-- Portability :  portable
--
-- This is the ForSyDe library for the continuous time MoC (CT-MoC).
-- Revision: $Revision: 1.7 $
-- Id: $Id: CTLib.hs,v 1.7 2007/07/11.
-----------------------------------------------------------------------------
module ForSyDe.Shallow.CTLib (
  -- module ForSyDe.Shallow.CoreLib,
  -- * The signal data type
  SubsigCT(..), timeStep,
  -- * Primary process constructors
  combCT, combCT2, mooreCT, mealyCT, delayCT, shiftCT, initCT,
  -- * Derived process constructors
  -- | These constructors instantiate very useful processes.
  -- They could be defined in terms of the basic constructors
  -- but are typically defined in a more direct way for
  -- the sake of efficiency.
  scaleCT, addCT, multCT, absCT,
  -- * Convenient functions and processes
  -- | Several helper functions are available to obtain parts
  -- of a signal, the duration, the start time of a signal, and
  -- to generate a sine wave and constant signals.
  takeCT, dropCT, duration, startTime, sineWave, constCT, zeroCT,
  -- * AD and DA converters
  DACMode(..), a2dConverter, d2aConverter,
  -- * Some helper functions
  applyF1, applyF2, applyG1, cutEq,
  -- * Sampling, printing and plotting
  -- $plotdoc
  plot, plotCT, plotCT', showParts, vcdGen
  ) where

import ForSyDe.Shallow.CoreLib
import System.Cmd
import System.Time
import System.IO
import System.Directory
--import Control.Exception
import Data.Ratio
import Numeric()

-- The revision number of this file:
revision :: String
revision = filter (\ c -> (not (c == '$')))
             "$Revision: 1.7 $, $Date: 2007/07/11 08:38:34 $"

-- |The type of a sub-signal of a continuous signal. It consists of the
-- function and the interval on which the function is defined.
-- The continuous time signal is then defined as a sequence of SubsigCT -- elements: Signal SubsigCT data (Num a) => SubsigCT a = SubsigCT ((Rational -> a), -- The function Time -> Value (Rational,Rational)) -- The interval on which the -- function is defined instance (Num a) => Show (SubsigCT a) where show ss = show (sampleSubsig timeStep ss) --unit :: String -- all time numbers are in terms of this unit. --" label) ++(show sigid)++".dat", (mkLabel label sigid)) mkLabel :: String -> Int -> String mkLabel "" n = "sig-" ++ show n mkLabel l _ = l mkAllLabels :: (Num a) => [(Int,String,[(Rational,a)])] -> String mkAllLabels sigs = drop 2 (foldl f "" sigs) where f labelString (n,label,_) = labelString ++ ", " ++ (mkLabel label n) replChar :: String -- all characters given in this set are replaced by '_' -> String -- the string where characters are replaced -> String -- the result string with all characters replaced replChar [] s = s replChar _ [] = [] replChar replSet (c:s) | elem c replSet = '_' : (replChar replSet s) | otherwise = c : (replChar replSet s) dumpSig :: (Num a) => [(Rational,a)] -> String dumpSig points = concatMap f points where f (x,y) = show ((fromRational x) :: Float) ++ " " ++ show (y) ++ "\n" mkPlotScript :: [(String -- the file name of the dat file ,String -- the label for the signal to be drawn )] -> String -- the gnuplot script mkPlotScript ns = "set xlabel \"seconds\" \n" ++ "plot " ++ (f1 ns) ++ "\n" ++ "set terminal postscript eps color\n" ++ "set output \"" ++ plotFileName++".eps\"\n" ++ "replot \n" ++ "set terminal epslatex color\n" ++ "set output \"" ++ plotFileName++"-latex.eps\"\n" ++ "replot\n" -- ++ "set terminal pdf\n" -- ++ "set output \"fig/ct-moc-graph.pdf\"\n" -- ++ "replot\n" where f1 :: [(String,String)] -> String f1 ((datfilename,label):(n:ns)) = "\t\"" ++ datfilename ++ "\" with linespoints title \""++label++"\",\\\n" ++ " " ++ (f1 (n:ns)) f1 ((datfilename,label):[]) = "\"" ++ datfilename ++ "\" with linespoints title 
\""++label++"\"\n"
          f1 [] = ""
          plotFileName = "fig/ct-moc-graph-" ++ (f2 ns)
          f2 :: [(String,String)] -> String
                        -- f2 generates part of the
                        -- filename for the eps and
                        -- latex files, which is
                        -- determined by the signal
                        -- labels.
          f2 [] = ""
          f2 ((_,label):[]) = label
          f2 ((_,label):_)  = label ++ "_"

-- tryNTimes applies a given action at most n times. Every time
-- the action is applied and an error occurs, it tries again but
-- with a decremented first argument. It also changes the file name
-- because the file name uses the n as part of the name.
-- The idea is that the action tries different files to operate on.
-- The problem was that when gnuplot was called on a gnuplot script
-- file, it was not possible to write a new script file with the same
-- name and start a new gnuplot process (at least not with ghc or ghci on
-- cygwin; it worked fine with hugs on cygwin).
-- So we go around the problem here by trying different file names until
-- we succeed or until the maximum number of attempts have been performed.
tryNTimes :: Int -> (String -> IO ()) -> IO String
tryNTimes n a
    | n <= 0 = error "tryNTimes: not succeeded"
    | n > 0  = do catch (action fname a) (handler a)
    where handler :: (String -> IO()) -> IOError -> IO String
          handler a _ = tryNTimes (n-1) a
          fname = "./fig/ct-moc-" ++ (show n) ++ ".gnuplot"
          action :: String -> (String -> IO ()) -> IO String
          action fname a = do (a fname)
                              return fname
tryNTimes _ _ = error "tryNTimes: Unexpected pattern."

----------------------------------------------------------------------------
{- |/ct-moc.vcd. If the file exists, it is overwritten. If the
directory does not exist, it is created. -}
vcdGen :: (Num a) =>
          Rational      -- ^Sampling period; defines for what
                        -- time stamps the values are written.
       -> [(Signal (SubsigCT a), String)]
          -- ^A list of (signal,label) pairs. The signal values are written
          -- and denoted by the corresponding labels.
-> IO String -- ^A simple message to report completion vcdGen _ [] = return [] vcdGen 0 _ = error "vcdgen: Cannot compute signals with step=0.\n" vcdGen step sigs = do -- putStr (show (distLabels (expandSig 1 writeVCDFile sigs -- We return some reporting string: return ("Signal(s) " ++(mkAllLabels sigs) ++ " dumped.") mkLabel :: String -> Int -> String mkLabel "" n = "sig-" ++ show n mkLabel l _ = l mkAllLabels sigs = drop 2 (foldl f "" sigs) where f labelString (n,label,_) = labelString ++ ", " ++ (mkLabel label n) writeVCDFile :: (Show a) => [(Int,String,[(Rational,a)])] -> IO() writeVCDFile sigs = do mkDir "./fig" clocktime <- getClockTime let {date = calendarTimeToString (toUTCTime clocktime); labels = getLabels sigs; timescale = findTimescale sigs;} in writeFile mkVCDFileName ((vcdHeader timescale labels date) ++ (valueDump timescale (prepSigValues sigs))) mkVCDFileName :: String mkVCDFileName = ("./fig/ct-moc.vcd") mkDir :: String -> IO() mkDir dir = do dirExists <- doesDirectoryExist dir if (not dirExists) then (createDirectory dir) else return () -- prepSigValues rearranges the [(label,[(time,value)])] lists such -- that we get a list of time time stamps and for each time stamp -- we have a list of (label,value) pairs to be dumped: prepSigValues :: (Show a) => [(Int,String,[(Rational,a)])] -> [(Rational,[(String,a)])] prepSigValues sigs = f2 (distLabels sigs) where -- f2 transforms a [[(label,time,value)]] -- into a [(time, [label,value])] structure: f2 :: (Show a) => [[(String,Rational,a)]] -> [(Rational,[(String,a)])] f2 [] = [] f2 ([]:_) = [] f2 xs = f3 hdxs : f2 tailxs where -- here we take all first elements of the lists in xs -- and the tail of the lists in xs: (hdxs,tailxs) = (map g1 xs, map (\ (_:ys)-> ys) xs) g1 [] = error ("prepSig.f2.g1: first element of xs is empty:" ++ "xs="++show xs) g1 (y:_) = y f3 :: (Show a) => [(String,Rational,a)] -> (Rational,[(String,a)]) f3 (valList@((_, time, _):_)) = (time, f4 time valList) f3 [] = error 
("prepSigValues.f2.f3: " ++ "empty (label,time,value)-list") f4 :: (Show a) => Rational -> [(String,Rational,a)] -> [(String,a)] f4 _ [] = [] f4 time ((label,time1,value):valList) | time == time1 = (label,value) : f4 time valList | otherwise = error ("prepSigValues: Time stamps in different" ++ " signals do not match: time=" ++(show time)++", time1="++(show time1) ++", label="++label++", value="++(show value) ++"!") -- distLabels inserts the labels into its corresponding -- (time,value) pair list to get a (label,time,value) triple: distLabels :: [(Int,String,[(Rational,a)])] -> [[(String,Rational,a)]] distLabels [] = [] distLabels ((_,label,valList):sigs) = (map (\ (t,v) -> (label,t,v)) valList) : (distLabels sigs) getLabels :: [(Int,String,[(Rational,a)])] -> [String] getLabels = map (\(_,label,_)-> label) vcdHeader :: Rational -> [String] -> String -> String vcdHeader timescale labels date = "$date\n" ++ date ++ "\n" ++ "$end\n" ++ "$version\n" ++ "ForSyDe CTLib " ++ revision ++ "\n" ++ "$end\n" ++ "$timescale 1 " ++ (timeunit timescale) ++ " $end\n" ++ "$scope module top $end\n" ++ (concatMap (\ label -> ("$var real 64 "++label ++ " " ++ label ++ " $end\n")) labels) ++ "$upscope $end\n" ++ "$enddefinitions $end\n" ++ "#0\n" ++ "$dumpvars\n" ++ (concatMap (\ label -> "r0.0 "++label++ "\n") labels) ++ "\n" valueDump :: (Show a) => Rational -> [(Rational,[(String,a)])] -> String valueDump _ [] = "" valueDump timescale ((t,values):valList) = "#"++(show (g (t/timescale)))++"\n" ++ (f values) ++ (valueDump timescale valList) where f :: (Show a) => [(String,a)] -> String f [] = "" f ((l,v):values) = "r"++(show v)++" "++l++"\n" ++ (f values) g :: Rational -> Integer -- Since the VCD format expects integers for the timestamp, we make -- sure that only an integer is printed in decimal format (no exponent): g t = round t timeunit :: Rational -> String timeunit timescale | timescale == 1 % 1 = "s" | timescale == 1 % 1000 = "ms" | timescale == 1 % 1000000 = "us" | timescale 
== 1 % 1000000000 = "ns" | timescale == 1 % 1000000000000 = "ps" | timescale == 1 % 1000000000000000 = "fs" | otherwise = error ("timeunit: unexpected timescale: " ++ (show timescale)) findTimescale :: [(Int,String,[(Rational,a)])] -> Rational findTimescale sigs = f 1 (concatMap (\ (_,_,valList) -> (fst (unzip valList))) sigs) where f :: Rational -> [Rational] -> Rational f scale [] = scale f scale (x:xs) | r == 0 = f scale xs | otherwise = f (scale/1000) xs where (_,r) = (properFraction (abs (x / scale))) :: (Int,Rational) ------------------------------------------------------------------------- ----------------------------------------------------------- -- Testing the CT signals and process constructors: {-- main = testAll testAll = do testScaleCT testAddCT testMultCT testAbsCT testDelayCT testConverters testFeedback -- test scaleCT: testScaleCT = plotCT' 1E-4 [(toneA, "Tone A"), ((scaleCT 1.5 toneA), "Tone A x 1.5"), ((scaleCT 2.0 toneA), "Tone A x 2.0")] -- test addCT: testAddCT = plotCT' 1e-4 [(toneA, "Tone A"), (toneE, "Tone E"), ((addCT toneA toneE), "Tones A+E")] -- test multCT: testMultCT = plotCT' 1e-4 [(toneA, "Tone A"), (toneE, "Tone E"), ((multCT toneA toneE), "Tones A x E")] -- test absCT: testAbsCT = plotCT' 1E-4 [(toneA, "Tone A"), ((absCT toneA), "abs (Tone A)")] -- test delayCT: testDelayCT = plotCT' 1E-4 [(toneA, "Tone A"), (takeCT 0.02 ((delayCT 0.0025 toneA)), "Tone A delayed by 2.5ms"), (takeCT 0.02 ((shiftCT 0.003 toneA)), "Tone A shifted by 3ms")] -- test a2dConverter: testConverters = do (plotCT' 1e-4 [(toneA, "Tone A"), (d2aConverter DAlinear 1e-3 (a2dConverter 1e-3 toneA), "Tone A (A->D->A) converted with linear mode, 1ms sampling period")]) (plotCT' 1e-4 [(toneA, "Tone A"), (d2aConverter DAhold 1e-3 (a2dConverter 1e-3 toneA), "Tone A (A->D->A) converted with hold mode, 1ms sampling period")]) -- test a feed back loop: block sin = [sin,s1,s2,sout] where sout = p2 s1 s1 = p1 sin s2 s2 = p3 sout -- The individual processes: p1 :: Signal 
(SubsigCT Double) -> Signal (SubsigCT Double) -> Signal (SubsigCT Double) p2,p3 :: Signal (SubsigCT Double) -> Signal (SubsigCT Double) p1 = addCT p2 = scaleCT 0.5 p3 = initCT (zeroCT 0.0005) testFeedback = plotCT' 0.0001 ss where ss = [(sin, "sin"), (s1, "s1"), (s2, "s2"), (sout, "sout")] [sin,s1,s2,sout] = block (takeCT 0.005 toneA) toneA = sineWave (440.0) (0, 0.02) toneE = sineWave 520.0 (0, 0.02) -} {- Some performance tests on my laptop under cygwin: *********************************************************************** With ghc: with toneA = sineWave (440.0) (0, 2.0) toneE = sineWave 520.0 (0, 2.0) **** we make testAll with Double data types on ghc --make CTLib.hs -O3 -o ttt.exe time ttt real 0m33.749s user 0m0.010s sys 0m0.010s **** we make testAll with Rational data types on ghc --make CTLib.hs -O3 -o ttt.exe time ttt real 0m53.687s user 0m0.010s sys 0m0.010s **** hence the performance penalty when using Rational instead of Double is 1.59 (60%) longer delay. ************************************************************************ On hugs: (when using 0.2 second long waves, hugs aborted with an out of memory message both with Double and Rational; but with Double it aborted much faster;) toneA = sineWave (440.0) (0, 0.02) toneE = sineWave 520.0 (0, 0.02) **************** **** with Double: time runhugs.exe -h500k CTLib.hs real 0m1.702s user 0m0.020s sys 0m0.010s ****************** **** with Rational: time runhugs.exe -h500k CTLib.hs real 0m21.501s user 0m0.010s sys 0m0.020s **************** hence we have a factor of 12.5 longer delay with Rational compared to Double. -}
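The timescale search in findTimescale above picks the coarsest unit (1 s, 1 ms, 1 us, ...) in which every time stamp becomes an integer, dividing the scale by 1000 whenever a fractional remainder appears; valueDump then emits each stamp as the integer t / timescale, as the VCD format requires. As a cross-check of that logic, here is a small Python sketch of the same idea. The function and variable names are mine, not part of the library, and the sketch rechecks each timestamp until it divides evenly, a slightly more defensive variant than the one-pass Haskell version:

```python
from fractions import Fraction

def find_timescale(timestamps):
    """Return the coarsest scale s in {1, 1/1000, 1/10**6, ...} such that
    every timestamp is an integer multiple of s (mirrors findTimescale)."""
    scale = Fraction(1)
    for t in timestamps:
        # Shrink the scale until t / scale has no fractional part.
        while (abs(Fraction(t)) / scale) % 1 != 0:
            scale /= 1000
    return scale

def to_ticks(t, scale):
    """Integer VCD time stamp, as in valueDump's helper `g`."""
    return round(Fraction(t) / scale)
```

For example, stamps of 1 ms and 3 ms yield a scale of 1/1000 (so the VCD header would declare a timescale of 1 ms) and tick values 1 and 3.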
http://hackage.haskell.org/package/ForSyDe-3.1.1/docs/src/ForSyDe-Shallow-CTLib.html
17 posts in this topic

Similar Content

- By ALIENQuake

Hello everyone,

Finally I've decided to ask a hard question about one of the projects I currently maintain: Big World Setup, a mod installer for Infinity Engine games like BG, IWD, PST etc.

Project page:
More screenshots:

General Features
- downloading mods (please see remarks!)
- easy mod installation
- correct install order of mods/components
- handling mod and component conflicts and auto-solving them
- easy backup creation/restoring
- ability to add your own mods

Internal Features (every single feature which you see here is already working in AutoIt)

It looks like a simple GUI application, but it has quite complicated logic regarding "handle mod and components conflicts and auto solve them" - this is the most important feature of the app. This app needs to be converted into a multi-platform GUI application because the Enhanced Editions of the games can also be played on OSX and Linux. But for the past 6 years, there wasn't a single gamer/developer who would try to convert this app using a multi-platform language and GUI. This is the moment when I'm asking for help:

- Which language would suit best for a multi-platform GUI application? C#, Python, Java or other?
- Is there any general approach for such a conversion?
- Do the AutoIt community/developers have experience with converting AutoIt GUI applications into multi-platform GUI apps by using a multi-platform language like C#, Python or Java?
- Is there someone who isn't scared off by looking at the source code of the application and the feature list, who could help me with converting, or even begin by creating a multi-platform GUI app template which will simply run the same command line on every system?

If there is something else you would like to know, please ask and I will try my best to answer.
- By JohnOne

I'm trying to follow a tutorial on YouTube dealing with a service-based database. In the video, the guy adds a service-based database to the project, and a DataSource Configuration Wizard pops up, where he selects an entity data model, which maps a load of stuff, creates classes automatically, has diagrams, the lot - an all-singing, all-dancing dealy. But in my VS2015 it does not pop up, and I don't know how to get to the same place in another fashion; I've only ever used SQLite, see. Hoping someone knows how to do what he does some other way. Skip to about 2 minutes if you are interested and can be bothered.

- By LuuQuangICT

Hello everybody!!! I tried this code and it works in AutoIt (au3):

    #include <MsgBoxConstants.au3>
    #include <GuiListView.au3>

    Global $hWndWindow = WinGetHandle("Form1")
    Global $hWndLv = ControlGetHandle($hWndWindow, "", "WindowsForms10.SysListView32.app.0.bf7771_r11_ad11")
    MsgBox($MB_SYSTEMMODAL, "", ControlListView($hWndWindow, "", $hWndLv, "GetItemCount")) ;Search
    _GUICtrlListView_ClickItem($hWndLv, 1, "left", False, 2)

But now I want to use _GUICtrlListView_ClickItem in C#. I added references to AutoItX3.dll and AutoItX3.Assembly.dll in my project, but I could not find any function equivalent to _GUICtrlListView_ClickItem in them. Please help me.

- By Vexhero

Hello!,

    504,0x000001,3) ;Search
    If NOT(@error) Then
        ; MsgBox(0,"Title","Found It!")
        $it_full = True
        Sleep(8000)
    EndIf
    WEnd

- By Chocolade

I ?
https://www.autoitscript.com/forum/topic/161349-stopwatch-udf-based-on-the-concept-in-net/
None 0 Points Nov 20, 2011 05:52 PM|milanbetter|LINK

Hello,

I have noticed some strange behavior when a PlaceHolder is used to dynamically load controls. I suspect some kind of bug, or behavior that is by design. ViewState is lost when the control's Text property is assigned before the control is added to the PlaceHolder. Let me explain:

When the line of code PlaceHolder1.Controls.Add(lb) comes before the assignment of the Text property, lb.Text = "SOME TEXT", the system works well and ViewState is read normally after postback. But when PlaceHolder1.Controls.Add(lb) comes after lb.Text = "SOME TEXT", ViewState fails to work correctly and lb.Text is lost after postback.

I use ASP.NET 3.5 (Visual Studio 2008 SP1). Here is a sample of the non-working code; if you move PlaceHolder1.Controls.Add(lb) to just before the lb.Text assignment, it works fine.

Best wishes,
Milan

Default.aspx.cs and Default.aspx are:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using System.Text;

    namespace WebApplication1
    {
        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Init(object sender, EventArgs e)
            {
                Label lb = new Label();
                if (!IsPostBack)
                    lb.Text = "SOME TEXT";
                PlaceHolder1.Controls.Add(lb);
            }
        }
    }

    <%@ Page
    <html xmlns="" >
    <head runat="server">
        <title></title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <asp:PlaceHolder ID="PlaceHolder1" runat="server"></asp:PlaceHolder>
        </div>
        <asp:Button
        </form>
    </body>
    </html>

Star 12060 Points Nov 21, 2011 08:48 PM|atconway|LINK

milanbetter: "ViewState fails to run correctly and lb.Text is lost after PostBack"

Just remember that dynamically created controls will always be lost after postback, regardless of whether you use a PlaceHolder or not. You will need to recreate them and add them back to the PlaceHolder on every postback. I suggest doing this in the Page_OnInit() event or somewhere similar.
Have a look at the following for additional information:

As far as the order of instantiation, assignment of properties, and being added to the PlaceHolder, the MSDN does it in exactly the aforementioned order. Take a look at the following for a code example: How to: Add PlaceHolder Web Server Controls to a Web Forms Page:

I think the behavior you are seeing is by design and might be caused by where you are doing the assignment. You should really have your code in the OnInit() event. The following post highlights a similar issue to the one you are having: Dynamically Loaded Control can not maintain values at PostBack?

"Figure 4 illustrates the sequence of events that transpire, highlighting why the change to the Label's Text property needs to be stored in the view state."

Understanding ASP.NET View State (above excerpt):

If you really think there is a reproducible bug and it's not by design, you can report it to Microsoft Connect below:

Hope this helps!

None 0 Points Nov 22, 2011 12:51 AM|milanbetter|LINK

2 replies
Last post Nov 22, 2011 12:51 AM by milanbetter
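The timing issue atconway describes can be modeled abstractly: a dynamically created control only starts tracking property changes once it joins the page's control tree, so values assigned before Controls.Add() are never marked dirty and never make it into view state. The following Python toy model is not ASP.NET code, just a sketch of that state-tracking rule (all class and method names here are mine):

```python
class Control:
    """Toy model of view-state dirty tracking (not the real ASP.NET classes)."""
    def __init__(self):
        self._props = {}
        self._tracking = False
        self._dirty = set()

    def set(self, name, value):
        self._props[name] = value
        if self._tracking:          # only tracked changes are saved
            self._dirty.add(name)

    def saved_view_state(self):
        return {k: self._props[k] for k in self._dirty}

class Page:
    def __init__(self):
        self.controls = []

    def add(self, control):
        control._tracking = True    # tracking begins when added to the tree
        self.controls.append(control)

# Set-then-add: the value is never marked dirty, so it is "lost" on postback.
page, lb = Page(), Control()
lb.set("Text", "SOME TEXT")
page.add(lb)
assert lb.saved_view_state() == {}

# Add-then-set: the change is tracked and survives.
page2, lb2 = Page(), Control()
page2.add(lb2)
lb2.set("Text", "SOME TEXT")
assert lb2.saved_view_state() == {"Text": "SOME TEXT"}
```

This mirrors why moving PlaceHolder1.Controls.Add(lb) before the Text assignment makes the value survive the postback.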
http://forums.asp.net/p/1741435/4694168.aspx?Re+PlaceHolder+and+ViewState+Bug+
- NAME
- DESCRIPTION
- HOW DO I ...
  - Catching Action Result in Dispatcher
  - Catching POST data in Dispatcher
  - Upload file via Action parameter
  - Use PostgreSQL instead of the default SQLite database.
  - Create an LDAP autocomplete field
  - Add Atom/RSS Feeds ?
  - Use date or time objects with the database?
  - Emulate 'created_on' field like Rails ?
  - Emulate 'updated_on' ?
  - Limit access to pages to logged-in users
  - Run my Jifty app as FastCGI in Apache/Lighttpd ?
  - Take actions based on data in URLs
  - Pass HTML form input directly to components
  - Perform database migration
  - Use different table names than the ones Jifty automatically creates
  - Perform ajax canonicalization on a given field ?
  - Use iepngfix.htc to add PNG support in IE5.5+
  - Render model refers_to field as a select widget with meaningful display name
  - Create mutually dependent models
  - Reuse Jifty models and actions outside of a Jifty app
  - Send out dynamically created binary data
  - Create a many-to-many relationship
  - Show login box on an action submit
  - Append a new region based upon the result of the last action using AJAX

NAME

Jifty::Manual::Cookbook - Recipes for common tasks in Jifty

DESCRIPTION

This document aims to provide solutions to common questions of "How do I do x with Jifty?" While the solutions here certainly aren't the only way to do things, they're generally the solutions the developers of Jifty use, and ones that work pretty well.

HOW DO I ...
Catching Action Result in Dispatcher

To get an action result in your dispatcher:

    on '/=/your_action_name/' => run {
        my $result = Jifty->web->response->result( 'action-moniker' );
    };

Catching POST data in Dispatcher

In your MyApp::Dispatcher:

    on POST '/=/catch' => run {
        my $data = get('data');
    };

Upload file via Action parameter

In your action schema:

    use Jifty::Action schema {
        param image =>
            type is 'upload',
            label is _('Upload a file');
    };

To catch the file, in your take_action method:

    sub take_action {
        my $self = shift;
        my $fh = $self->argument_value('image');
        local $/;
        my $file_content = <$fh>;
        close $fh;
    }

Use PostgreSQL instead of the default SQLite database.

You need to modify your etc/config.yml file:

    Database:
      AutoUpgrade: 1
      CheckSchema: 1
      Database: twetter
      Driver: SQLite

Please change Driver "SQLite" to "Pg", and make sure that you've installed DBD::Pg.

Create an LDAP autocomplete field

You need an action in your application. Then run jifty action --name LdapSearch. In lib/MyApp/Action/LdapSearch.pm, add the search field:

    use Jifty::Action schema {
        param search =>
            autocompleter is \&ldap_search;
    };

We need Net::LDAP and an accessor to our LDAP value:

    use Net::LDAP;
    __PACKAGE__->mk_accessors(qw(LDAP));

And we can write our ldap_search function. The search needs at least 3 characters and returns an array of DisplayName (login):

    sub ldap_search {
        my $self   = shift;
        my $search = shift;
        my @res;
        if (length $search > 2) {
            if (! $self->LDAP() ) {
                $self->LDAP( Net::LDAP->new('ldap.myorg.org') );
                $self->LDAP()->bind( );
            }
            my $result = $self->LDAP()->search(
                base      => 'ou=people,dc=myorg,dc=org',
                filter    => '(cn='.$search.')',
                attrs     => ['uid','cn','givenname','displayname'],
                sizelimit => 10
            );
            foreach my $entr ( $result->sorted('cn') ) {
                push @res, $entr->get_value('displayname').' ('.$entr->get_value('uid').')';
            }
        }
        return @res;
    }

Add Atom/RSS Feeds ?

You could generate atom/rss feeds for virtually any model in your application.
For instance, suppose there's a "Post" model (like a blog entry), you could use XML::Feed to do this: # In '/feed' template <%args> $type </%args> <%init> use XML::Feed; my $posts = MyApp::Model::PostCollection->new(); $posts->unlimit(); my $feed = XML::Feed->new( $type ); $feed->title( Jifty->config->framework('ApplicationName') . " Feed" ); $feed->link( Jifty->web->url ); $feed->description( $feed->title ); while( my $post = $posts->next ) { my $feed_entry = XML::Feed::Entry->new($type); $feed_entry->title($post->title); $feed_entry->author($post->author->username); $feed_entry->link( Jifty->web->url . "/posts/" . $post->id ); $feed_entry->issued( $post->created_on ); $feed_entry->summary( $post->body ); $feed->add_entry( $feed_entry ); } </%init> <% $feed->as_xml |n %> And add this in MyApp/Dispatcher.pm to make URI look prettier: # note the case of the feed types on qr{^/feed/(Atom|RSS)}, run { set type => $1; show('/feed'); }; And of course, you need to put these in your HTML header template (conventionally that's /_elements/header): <link rel="alternate" type="application/atom+xml" title="Atom" href="/feed/atom" /> <link rel="alternate" type="application/rss+xml" title="RSS" href="/feed/rss" /> Use date or time objects with the database? On your columns, specify either filters are 'Jifty::DBI::Filter::DateTime' for a timestamp (date and time), or filters are 'Jifty::DBI::Filter::Date' for just a date. Jifty will then automatically translate to and from DateTime objects for you when you access the column on your model. Additionally, if you add: filters are qw(Jifty::Filter::DateTime Jifty::DBI::Filter::Date) Jifty will inspect the model's current_user for a time_zone method, and, if it exists, set the retrieved DateTime object's time zone appropriately. All dates are stored in UTC in the database, to ensure consistency. 
Emulate 'created_on' field like Rails ?

    column created_on =>
        type is 'timestamp',
        default is defer { DateTime->now },
        filters are 'Jifty::DBI::Filter::DateTime';

This approach is not really accurate: if you render this field in a form, the defer value is evaluated at rendering time, which might be way earlier than the creation of the record. However, it is the easiest approach. If you're using the newly recommended Jifty::DBI::Record schema {} to declare your schema, you might find this trick not working at the moment. Please override the model's before_create method instead:

    sub before_create {
        my ($self, $attr) = @_;
        $attr->{'created_on'} = DateTime->now;
    };

Emulate 'updated_on' ?

If a lot of columns could change, you can override the _set method:

    sub _set {
        my $self = shift;
        my ($val, $msg) = $self->SUPER::_set(@_);
        $self->SUPER::_set(column => 'changed_on', value => defer {DateTime->now});
        $self->SUPER::_set(column => 'from',
            value => Jifty->web->request->remote_host. " / ".
                     Jifty->web->current_user->user_object->name );
        return ($val, $msg);
    }

Limit access to pages to logged-in users

The best place to do this is probably in your application's Dispatcher. If, for example, you wanted to limit access to /secret to logged-in users, you could write:

    before qr'^/secret' => run {
        unless(Jifty->web->current_user->id) {
            Jifty->web->tangent(url => '/login');
        }
    };

Then, in your login form component, you would write something like:

    <% Jifty->web->return(to => '/', submit => $login_action) %>

The combination of the tangent and return will cause the user to be returned to wherever they came from. See Jifty::Continuation for more information.

If you want model-level access control, Jifty provides a ready-built ACL system for its models; see Jifty::Manual::AccessControl for details.

Finally, you can also allow or deny specific actions in the dispatcher, to limit who is able to perform what actions -- see Jifty::API.

Run my Jifty app as FastCGI in Apache/Lighttpd ?

Jifty provides a really simple way to run the application as a FastCGI server.
The complete instructions and examples are in jifty help FastCGI for both Apache servers and Lighttpd servers. (Please cd to your app directory before running this command.) You'll have to install the CGI::Fast and FCGI modules for this.

Take actions based on data in URLs

You can add actions to the request based on data in URLs, or anything else, using Jifty::Request::add_action. For example, suppose you wanted to make the path /logout log the user out, and redirect them to the home page. You could write:

    before '/logout' => run {
        Jifty->web->request->add_action( class => 'Logout' );
        Jifty->web->request->add_action( class => 'Redirect',
                                         arguments => { url => '/' });
    };

Pass HTML form input directly to components

Sometimes you don't want to take an action based on input from HTML forms, but just want to change how the page is displayed, or do something similarly transient. Jifty::Action is great, but it doesn't have to be the answer to everything. For cases like this, it's fine to use typical HTML <input>s. Their values will be accessible as request arguments, so you can fetch them with get in the dispatcher, and they will be passed as arguments to top-level Mason components that list them in <%args>. And don't worry about namespace conflicts with Jifty's auto-generated argument fields -- Jifty prefixes all its names with J: so there won't be a problem.

Perform database migration

Edit etc/config.yml and change Database->Version to a proper value (say, 0.0.2). Then run:

    jifty schema --setup

Jifty will inspect the current database and perform the proper actions. You can also give a --print option to see the SQL statements without applying them:

    jifty schema --setup --print

Use different table names than the ones Jifty automatically creates

In YourApp::Record, define a _guess_table_name sub that doesn't pluralise or pluralises differently.

Perform ajax canonicalization on a given field ?

Asking the user to input something in a form is really common in a web app.
For certain form fields you want the values stored in a normalized/canonicalized form in the database, and you can do ajax canonicalization in Jifty very easily.

Let's say your User model needs a canonicalized username field to make sure those names are in lowercase. All you have to do is define a method named canonicalize_username in your Model class, like this:

    package MyApp::Model::User;
    use base qw(MyApp::Record);

    sub canonicalize_username {
        my $class = shift;
        my $value = shift;
        return lc($value);
    }

If the form is generated by a Jifty::Action::Record-based action (all those autogenerated CRUD actions), then this is all you need to do. And that covers probably 90% of cases. Jifty::Action::Record checks whether there is a method named like canonicalize_fieldname when it is rendering form fields. If found, the related javascript code is generated. You do not have to modify any code in your view; Jifty does it for you. The ajax canonicalization happens when the input focus leaves that field, so you will see the effect a moment after the value in the field changes.

Of course, you can apply the same trick to your own Action classes. You can also use the canonicalization to change data in other fields. For example, you might want to update the postcode field when the suburb field is changed:

    $self->argument_value( other_field => "new value" )

Use iepngfix.htc to add PNG support in IE5.5+

Jifty has included iepngfix.htc by Angus Turnbull. The HTC file will automatically add PNG support to IMG elements and also supports any element with a "background-image" CSS property. If you want to use this fix, please include this one line in your CSS file, with the tag names to which you want the script applied:

    img, div { behavior: url(/static/js/iepngfix.htc) }

Alternatively, you can specify that this will apply to all tags like so:

    * { behavior: url(/static/js/iepngfix.htc) }

Check details from Angus himself.
Render model refers_to field as a select widget with meaningful display name

See Jifty::Record for the brief_description method. Sometimes you need to render a column which uses refers_to to point at another model, but you do not want to display the unique id of the entries; you want a meaningful display name instead.

    use Jifty::DBI::Schema;
    use MyApp::Record schema {
        column colors =>
            refers_to MyApp::Model::Color;
    };

You can implement a name method in the Color model:

    package MyApp::Model::Color;
    use Jifty::DBI::Schema;
    use MyApp::Record schema {
        column color_name =>
            type is 'varchar';
    };

    sub name {
        my $self = shift;
        return $self->color_name;
    }

So that, when you render a field which refers to MyApp::Model::Color, it will render a select widget with the corresponding color names instead of the unique ids for you.

Create mutually dependent models

Sometimes you need two tables that both depend upon each other. That is, you have model A that needs to have a column pointing to Model B and a column in Model B pointing back to model A. The solution is very straightforward: just make sure you set up the base class before you load the dependent model, and this will just work. For example:

    package ModelA;
    use base qw/ MyApp::Record /;   # Note that "use base..." comes first
    use ModelB;

    use Jifty::DBI::Schema;
    use MyApp::Record schema {
        column b_record =>
            refers_to ModelB;
    };

    package ModelB;
    use base qw/ MyApp::Record /;   # Note that "use base..." comes first
    use ModelA;

    use Jifty::DBI::Schema;
    use MyApp::Record schema {
        column a_record =>
            refers_to ModelA;
    };

Everything should work as expected.

Reuse Jifty models and actions outside of a Jifty app

    use lib '/path/to/MyApp/lib';
    use Jifty;
    BEGIN { Jifty->new(no_request => 1); }

    use MyApp::Model::Foo;
    use MyApp::Action::FrobFoo;

From there you can use the model and action to access your data and run your actions like you normally would. If you've actually installed your app into @INC, you can skip the use lib line.
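The pattern in the refers_to recipe above (give the referenced model a name method, and let the widget display names while submitting ids) is framework-independent. As a language-neutral sketch, not Jifty code, building the option list for such a select widget could look like this (class and function names are mine):

```python
class Color:
    """Stand-in for a referenced record with a unique id and a display name."""
    def __init__(self, id, color_name):
        self.id = id
        self.color_name = color_name

    def name(self):
        # Meaningful display name, as in the recipe's `name` method.
        return self.color_name

def select_options(records):
    """Build (value, label) pairs for a select widget: submit the unique id,
    display the human-readable name."""
    return [(r.id, r.name()) for r in records]

opts = select_options([Color(1, "red"), Color(2, "blue")])
assert opts == [(1, "red"), (2, "blue")]
```

The widget posts back the id, while the user only ever sees the name.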
Send out dynamically created binary data

In a Template::Declare view, do something like this:

    template 'image' => sub {
        # ...
        # create dynamic $image, for example using Chart::Clicker
        Jifty->web->response->content_type('image/png');
        Jifty->web->out($image);
    };

Create a many-to-many relationship

You need to create two one-to-many relationships with a linking table, as you normally would in pure SQL. First, create your linking table by running:

    bin/jifty model --name LinkTable

Modify the newly created MyApp::Model::LinkTable class to add new columns linking back to either side of the table:

    use MyApp::Record schema {
        column left_table =>
            refers_to MyApp::Model::LeftTable;
        column right_table =>
            refers_to MyApp::Model::RightTable;
    };

Then create links to the linking table in MyApp::Model::LeftTable:

    use MyApp::Record schema {
        # other columns...
        column right_things =>
            refers_to MyApp::Model::LinkTableCollection by 'left_table';
    };

Then create links to the linking table in MyApp::Model::RightTable:

    use MyApp::Record schema {
        # other columns...
        column left_things =>
            refers_to MyApp::Model::LinkTableCollection by 'right_table';
    };

Now, add your records. To create a relationship between a row in each of the two tables:
if at least one require login # then save them in a continuation and redirect to the login page tangent '/login' if grep $_->can('require_login') && $_->require_login, map $_->class, Jifty->web->request->actions; }; All you have to do now is to add sub require_login { return 1 } into actions which need this functionality. Note that you can implement complex logic in the require_login method, but it's called as class method what set a lot of limitations. That would be really cool to have access to all data of the action in this method, so you are welcome to post a better solution. Append a new region based upon the result of the last action using AJAX In the Administration Interface, you can create new items. You enter the information and then the newly created item is appended to the end of the list immediately without reloading the page. You can use this recipe to do something like this, or to modify the page however you need based upon the result of any server-side action. Render your action fields as you normally would. The key to the process is in the submit button. Here's how the Jifty::View::Declare::CRUD does this, as of this writing: Jifty->web->form->submit( label => 'Create', onclick => [ { submit => $create }, { refresh_self => 1 }, { element => Jifty->web->current_region->parent->get_element( 'div.list'), append => $self->fragment_for('view'), args => { object_type => $object_type, id => { result_of => $create, name => 'id' }, }, }, ] ); This could is embedded in a call to outs() for use with Template::Declare templating, but you could just as easily wrap the line above in <% %> for use with Mason templates. The keys is each item in the list past to onclick: { submit => $create }, This tells Jifty submit the form elements related to the action referenced by $create only. Any other actions in the same form will be ignored. 
{ refresh_self => 1 }, This tells the browser to refresh the current region (which will be the one containing the current submit button), so that the form can be reused. You could also modify this behavior to delete the region, if you wrote: { delete => Jifty->web->current_region }, The most complicated part is the most important: { element => Jifty->web->current_region->parent->get_element( 'div.list'), append => $self->fragment_for('view'), args => { object_type => $object_type, id => { result_of => $create, name => 'id' }, }, }, - element The elementparameter tells the browser where to insert the new item. By using Jifty->web->current_region->parent->get_element('div.list'), the new code will be appended to the first divtag found with a listclass within the parent region. This assumes that you have added such an element to the parent region. You could look up an arbitrary region using Jifty->web->get_region('fully-qualified-region-name')if you don't want to use the parent of the current region. - append The appendargument gives the path to the URL of the item to insert. By using append, you are telling Jifty to add your new code to the end of the element given in element. If you want to add it to the beginning, you can use prependinstead. - args Last, but not least, you need to send arguments to the URL related to the action being performed. These can be anything you need for the your template to render the required code. In this example, two arguments are passed: object_typeand id. In the case of object_typea known value is passed. In the case of id, the result of the action is passed, which is the key to the whole deal: id => { result_of => $create, name => 'id' }, This line tells Jifty that you want to set the "id" parameter sent to the URL given in append, to the "id" set when $createis executed. 
That is, after running the action, Jifty will contact the URL and effectively perform: set id => $create->result->content('id'); It's a lot more complicated than that in actuality, but Jifty takes care of all the nasty details. If you want to use a custom action other than the built-in create and want to pass something back other than the "id", you just need to set the result into the appropriate key on the content method of the Jifty::Result. For more details on how you can customize this further, see Jifty::Manual::PageRegions, Jifty::Web::Form::Element, Jifty::Result, Jifty::Action, Jifty::Web::PageRegion, Jifty::Web, and Jifty::Request::Mapper.
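The linking-table pattern from the many-to-many recipe above is plain relational modeling, which Jifty's refers_to columns merely wrap. For readers who want to see the underlying SQL shape outside of Jifty, here is a minimal sketch using Python's built-in sqlite3 module (the table and column names are illustrative, not anything Jifty generates):

```python
import sqlite3

# In-memory database; the three tables mirror LeftTable, RightTable and
# the LinkTable that joins them (all names are illustrative).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE left_table  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE right_table (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE link_table (
        id INTEGER PRIMARY KEY,
        left_id  INTEGER REFERENCES left_table(id),
        right_id INTEGER REFERENCES right_table(id)
    );
""")
db.execute("INSERT INTO left_table VALUES (1, 'left-1')")
db.execute("INSERT INTO right_table VALUES (1, 'right-1')")
db.execute("INSERT INTO right_table VALUES (2, 'right-2')")

# One link row per relationship -- the equivalent of $link->create(...)
db.execute("INSERT INTO link_table (left_id, right_id) VALUES (1, 1)")
db.execute("INSERT INTO link_table (left_id, right_id) VALUES (1, 2)")

# "All right things for a left row" is the extra hop through link_table,
# just like iterating $left->right_things and calling ->right_table.
rows = db.execute("""
    SELECT r.name FROM right_table r
    JOIN link_table l ON l.right_id = r.id
    WHERE l.left_id = ?
    ORDER BY r.id
""", (1,)).fetchall()
print([name for (name,) in rows])  # ['right-1', 'right-2']
```

The "extra hop" in the Perl loop corresponds directly to the JOIN through link_table here.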
https://metacpan.org/pod/Jifty::Manual::Cookbook
Hello,

In this article, I will give an overview of the TPL in .NET 4, which should get you started in understanding it. TPL stands for Task Parallel Library, a library introduced in the .NET 4 framework.

In today's world, the devices on which our code runs are powerful yet compact. These machines come with multiple processors or multiple cores, so to harness their power we need to write code more efficiently and more intelligently. In the .NET world we have had threads for years, so why do we need TPL at all? Well, we needed a much simpler coding interface within the language, so that our multi-threaded code is easier to read. The objects created (threads) were also not lightweight, and we needed to solve the problem of harnessing concurrency in a better way than before. Earlier, one had to have strong, in-depth knowledge of the multi-threading concepts supported by the language, and of the machine architecture, to write code that fully exploits the hardware. Before TPL, it was really hard to write code whose work was evenly distributed across processors or cores, and meddling with how the OS assigns jobs to processors/cores was a bad idea; many times things would descend into chaos. TPL enables developers to write such code in a new, much better way: it helps achieve much better results on the same hardware with cleaner, simpler code. Basically, you can think of TPL as a wrapper library for the System.Threading APIs, but with much more intelligence added. TPL has thus eased the task of writing multi-threaded code in .NET.

So, I'll show you some basic code in the traditional API vs. the TPL API. Below is a small example in which you need to check, from your application, the status of a service running on 5 different machines in your LAN. So how do we code it?

Traditional way (the original post shows this as a code screenshot): with this way of coding, it's not clear whether your code is really optimized, is exploiting the cores of your processor efficiently, and is lightweight with respect to threads. To make sure, you need to write more code to get the number of cores/processors in your machine, find which core/processor is free, assign the job to it, and get it done. Basically, the developer has to write more intelligent code and algorithms to exploit all the processors and get the best out of them. Thankfully, TPL removes this overhead.

So how do you code it in TPL, you ask? To use the TPL APIs, we use the System.Threading.Tasks namespace. This namespace has many APIs; the one we use now to solve the above problem is Parallel.For. See how much simpler and easier the code is. As said above, TPL is smart enough to distribute the work evenly on powerful machines, so the developer need not worry about it; it also uses much more lightweight objects and a more efficient thread pool internally than the traditional ThreadPool. In the above code, I have used an anonymous delegate via a lambda expression as the third argument. This is because the Parallel.For API accepts an Action<T> delegate, so internally the compiler converts your anonymous delegate into a delegate in the IL (the original post shows this in an IL disassembly screenshot).

You have to note that TPL won't work efficiently if the machine's hardware doesn't support multiple cores/processors; running TPL code on a single-core, single-processor machine will execute all tasks sequentially. The beauty of TPL, unlike the traditional threading API, is that each time you wish to create a task, a new thread isn't created in the worker thread pool. Rather, an existing Task object is pulled out of the TaskFactory pool, improving overall efficiency. This thread pool in TPL is smart enough to figure out by itself when it has to create a new object in its pool, once it detects that all the pool objects are in use. Everything happens behind the scenes.

Thus, TPL lets the developer breathe easy. If you take a look at the above Parallel.For code, we are not explicitly using any thread pool; rather, the CLR internally uses a task scheduler and pool objects to get the job done. But I must warn you that the tasks will not run in sequence, nor can you control their order. This is because, in TPL, each Task is defined as an independent unit of work, and each Task could involve more threads.

As I already said, TPL is just syntactic sugar provided in the framework; it doesn't add new concepts to the Base Class Libraries, because internally all the existing machinery (ThreadPool, worker threads, etc.) is used, just in a much more efficient way.

Okay, so the above code shows a better way of starting tasks in a loop. But in our daily coding life we don't always get the same problem to solve; in other words, that alone isn't enough to solve every problem. Sometimes we need to control and handle tasks individually. So how do we do that, you ask? See the code below (shown as a screenshot in the original post). We will use the Task class for that. The Task class provides various APIs to get the job done easily and cleanly. As you can see from that code, it's much simpler and nicer, and all the spaghetti algorithmic logic is left to the framework to take care of.

Now, if you take a closer look at the code, we are using Task.Factory. This is because, as already said, Task internally uses thread pool objects to get the job done. You might wonder how to start the task: as soon as you call StartNew() on the Task API, the task gets started in the background. You may ask: can you postpone starting the task until later? Yes you can, but it's not recommended. In that style (also shown in the original post), the Task still uses the Factory pool internally, just not explicitly. But the tricky part comes when you have to choose a TaskScheduler to do your job: TaskScheduler provides two properties, Current and Default, and these two act differently. Hence it is recommended to use Task.Factory.StartNew rather than this. You can find a more in-depth discussion of StartNew vs. new Task() on the TPL team engineer's blog:

Now, if you remember, using the traditional threading APIs from the Thread class you were able to assign your own name to the threads created in your code. In TPL's Task style this is not possible, since everything comes from the pool. It does, however, give us an ID that is unique to each task, so you can get the Task.CurrentId value from the task object.

Now let's get under the hood and see what's really happening at the IL level. The Task line will be converted by the compiler to:

Task t = Task.Factory.StartNew(new Action(MainClass.MyTask));

Upon disassembling the code, it looks like the image in the original post. Do not worry if you don't understand anything in that image: it's the Intermediate Language into which every .NET language is compiled. As you can see at line 15, Task.Factory is a property, hence the compiler first calls the get_Factory() method internally. Why? Because we know that at the IL level all properties, i.e. set and get accessors, are converted to equivalent methods. At lines 17 and 18 it loads the MyTask() method via an Action delegate, then at line 19 it calls the StartNew method using this delegate to start the Task's execution.

Well, that's all for now, friends. There's no use writing more in depth, since it would be the same as what others have already written on the web. To dig deep into TPL, you can read the excellent article by Sacha Barber here:

Enjoy 🙂
https://adventurouszen.wordpress.com/2011/09/28/overview-on-task-parallel-library/
Ticket #611 (closed defect: fixed) Schema Validation is not working properly on CompoundWidgets Description I'll take a look later in more detail, meanwhile I'll attach a patch with a failing test... Got to go and can't give in more details right now... test speaks better :) Adeu P.S. I'm beginning to think we *might* need full qualification of form names (so schemas can be attached to forms) and recursive update of error dicts (as the schemas error dict could be nested so it would need to get merged with the nested dict produced by _validate_children) Attachments Change History Changed 11 years ago by alberto - attachment nested_schema_failing.patch added comment:1 Changed 11 years ago by michele I will take a quick look now but unitl Friday I won't have time (exam coming) anyway I think we can get along without fully qualified names, keep in mynd that we already have them but the form one is skipped. :P comment:2 Changed 11 years ago by michele Now I have to go sleep, anyway the problem is just in the _retrieve_dict_from_parent method the errors dict is already created in the right way and recursively, I have some small fixes here but they aren't ready yet I will look into it on Friday afternoon probably. The problem is that we should always return the right dict from the _retrieve_dict_from_parent method, but this is not happening for a fieldset since the last chain is avoided, we should instead walk the entire tree and return the right dict if present and otherwise the outermost dict that contains the name key for simple fields. This should make all work. comment:3 Changed 11 years ago by michele I think I've fixed the problem by fixing (as I said) retrieve_dict_from_parent, the browser test show that all is working right using your form of the test. 
You should fix your test that's not passing since you're asserting not errors, but keep in mynd that you're passing a dict that contains only a key, this will give a missing value error with the Schema validator so errors is not empty. I'm going to attach the patch, give me any feedback. Changed 11 years ago by michele - attachment fix-for-schema.patch added The fix, now _retrieve_dict_from_parent always returns the right dict for the actual parent (a dict if the widget is a compound) the outermost dict that contain our key if the widget is simple, hope the logic is implemented in the right way comment:4 Changed 11 years ago by alberto - Cc michele removed Great! I've committed you last patch at r824 and updated the tests. It seems to be working fine except for the case of mixed validators which I haven't tested yet. Specially when a Schema validatior has nested Schema validators (not the same as nested widgets having validators). I've got a pile of work so I might not be able to get them today. Hey! Forget about all this until you finish your exam! ;) Many thanks! Alberto P.S. Keeping this open until I get to test the nested schema stuff comment:6 Changed 11 years ago by alberto BTW, I forgot to comment that the test I sent was broken, it would alway fail as it was implemented (corrected at the commit) because the Schema needed to have ignore_missing=True so it wouldn't bark when some fields weren't set. Sorry for any time you might have wasted with this :( Thanks again, Alberto comment:7 Changed 11 years ago by michele Great, yep after a while I noticed the missing thing as I said, anyway no problem. :-) Nested schema? yep, I also wondered if they will work, and I think (and hope) they will work, anyway only your test will say it for sure! 
:D comment:8 Changed 11 years ago by michele comment:9 Changed 11 years ago by alberto Mmmm I'm not so sure that nested schemas will work :( (but shouldn't be talking without a test, though) because a nested schema validator can return a tree aswell, lets say we phone = validated by it's own widget age = Int() As you can see, the schema validates the deep buried age and the phone number is validated by it's own widget. The schema validator will only run when the form is validated AFTER the child widgets are validated and will produce a nested dict with the Invalid exception at one of it's leafs. What I don't think will work as expected is that both of this error dicts (the one produced by validate_children and the one produced by the schema) get merged so we see both errors: on phone and on age. But, as i've said, I'm just speculating... :) I *do* hope it works too, or else we'll have to resort to the "turtlish" recursive_update() of one of my patches at #577 :( Regarding [name]: it's just so the widget's repr prints something like TableForm?(name="form",...) and it's easier to spot which widget is being processed when printing self while debugging. It used to do this before you removed "name" from the Widget's template_vars so I added it back here. Regards, Alberto comment:10 Changed 11 years ago by alberto Oh my god! that's what happends when you don't preview what you blabber...: should age = Int() phone = no validator, already validated by it's widget comment:11 Changed 11 years ago by michele Hi Alberto, I never used nested Schema, anyway we do want both errors to be merged on the same error dict, I will wait for your test and take a look tomorrow but I'm pretty confident that if it doesn't work we can easily fix it. Regarding name, all ok didn't notice this was under repr, sorry. 
;-) comment:12 Changed 11 years ago by alberto comment:13 Changed 11 years ago by alberto I think we could solve this without merging both dicts at each node by merging them at the end of the tree walk. We could accomplish this by changing the "parent" paramteter that goes into each validate by a "state" object (+or- like formencode's) which stored the parent (as it's done now) and a parallel schema errors dict. These two dicts can be recursively merged only once at the end of the walk. What do you think? comment:14 Changed 11 years ago by michele Fixed... the test. :P I tried the same thing yesterday and it worked on the FF test. The problem: you're asserting an error on the age field, but you haven't defined a validator for it not by using a schema or a validator directly. my nosetests output: michele@ionic:~/Progetti/TurboGears/svn/turbogears/widgets$ nosetests ........................................................... ---------------------------------------------------------------------- Ran 59 tests in 2.523s OK michele@ionic:~/Progetti/TurboGears/svn/turbogears/widgets$ that was easy. :P But probably I'm missing something? comment:15 Changed 11 years ago by michele The reason why errors are not getting lost is that we are merging them node by node, doing it at the end will lost everything because if some keys are nested dict they will endup to not being updated (like a dict with it's update method) but just replaced. comment:16 Changed 11 years ago by alberto Maybe I should write some tests for the tests too :D Why the fk do I want to validate a name as an Int()?? Anyway, I'm committing the fix plus a test that tests what I really wanted to test (test, test, test ;). That is, nested Schemas (not nested widgets with mixed validators, which isn't bad to test anyway....). Haven't looked at the cause yet but seems like the error dict from the schema is not passed along but a Invalid instance is, should be easy to fix. 
Regarding the merging without losing keys it's what I implemented with recursive_update() at a patch back at #577, I was doing it at every node which is a waste as if two parallel dicts are built we can merge the only once and at the end. im not sure this is needed yet as this problem hasn't arised yet, as I've said, there's something wrong besides this... BTW, maybe I'm pushing this too far... do really want to support nested schemas?? Maybe from a consistency POV yes, but from a real use-case POV? (I have none, personally) comment:17 Changed 11 years ago by simon Nested schemas can be usefull from reusability stand point. One can define some elemental schemas and than composes everything from that. comment:18 Changed 11 years ago by alberto I've committed at r847 some progress on this. It seems that nested error dicts from the widgets are effectively not merged with the nested dict from the schema... I had to build an adapter function to translate the error dict formencode.Schema returns from nested schemas to the one we're used to. I've also seen that value dicts should be merged too. Now the question is, how can we implement this in the most effective way? It would be a waste to recursively update at every node on the way back to the root as the the lower levels will be already updated. But we need to take into account too that nested schemas can be arbitrariliy nested. The best way that comes to my head is to build 4 separate dicts (widget values, widget errors, schema values and schema_errors) and merge them at the end into values and errors. To do this we'll need to do something like the "state" variable I mentioned. Anyway, I hope you can come up with something easier like you've been doing in the previous patches, this 4 dict thing sounds scary... ;) I might give a try later and write some POC... Regards, Alberto Simon: I also have a feeling that nested schemas should be supported... 
but from a consistency POV, hadn't thought of the reusability stand point but sure makes sense. comment:19 Changed 11 years ago by alberto comment:20 Changed 11 years ago by alberto comment:21 Changed 11 years ago by Ooops comment:22 Changed 11 years ago by michele Well, I tested yesterday nested shema and noticed they will not work, since they error dict we get back from formencode is wrong and not nested, IMHO we shouldn't workaround it in TG but ask Ian and fix it in formencode IMHO. But thinking a bit about it yesterday and now, I do think we are pushing this thing a bit too much forward, it doesn't make sense supporting nested schema because that's not DRY, at this point we can just trash away the widget own validator and provide it only for compound widgets by supporting a schema, why would someone want to do this thing: class ContactSchema(Schema): name = String() age = Int() email = Email() class ContactFields(WidgetsDeclaration): name = TextField() age = TextField() email = TextField() 8 lines instead of: class ContactFields(WidgetsDeclaration): name = TextField(validator=String()) age = TextField(validator=Int()) email = TextField(validator=Email()) 4 lines Schema are good, but just for things like checking two password are corresponding, in the widget case, IMHO widgets should provide one good (and always working right) way of doing this thing and not fifty thousand different way that we need to support doing, I don't even want to encourage something like this, moreover as you can see nested schema are a pain for us to support (for value dict and errors dict) and can have bad impacts on performance ATM, and I even noticed that Schema are already slow them self (try them on the python shell). Sorry but this is really an unneeded complication IMHO ( 4 dict, a state value, recursive update). I would like to hear kevin opinion on this and in case work on it on a ticket for finding the best solution before submitting anything to svn. 
comment:23 Changed 11 years ago by michele Anyway I will give a try to find a simpler solution (if any exists) just for fun... :D I had in mind something yesterday if the error dict was right, I could try something using your adapter function, but we should push this fix on formencode because is really wrong to get back a flat dict... But not before monday probably (I don't think I can come up with something good in 20minutes...). :/ comment:24 Changed 11 years ago by michele For the moment I think we should revert any change on svn for handling nested schema, just because we should not have failing tests on svn and to discuss the best eventual solution on this ticket before submitting it on svn. comment:25 Changed 11 years ago by michele Ok, I still don't like this thing :P anyway I have the solution sitting here in my mind. At 99,9% I'm sure it will work with just 4/5 lines of code added (and the formencode dict adaption function until we fix this in formencode where it belongs). I will try to post a patch tomorrow probably later than sooner as I will be away. Have a nice weekend. comment:26 Changed 11 years ago by alberto r850 reverts the changes, I'm attaching a patch for them which probably wont apply clean since I haven're reverted the changes at r844 made by ronald, should be alright if those hunks get removed. All tests were passing though, but you're are right in that it should have been discussed further. Just though that as 0.9a1 is out the trunk wouldn't be so touchy, I could have been wrong... Anyway. The 4 dicts proposal was an optimization idea, to postpone the merging of the dicts until the last possible moment. I do think that this should be supported if we're going to support Schemas altogether. 
just because schemas *could* have nested schemas and it would be disturbing to see that they don't work as expected, or even worse, than they hide validation errors from other widgets Schemas are good for other things besides chained validators in any semi-complex form, for example, you're able to conditonally validate certain fields (password & passford_confirm) depending on the value of other fields (a change_password flag) thanks to the state variable. This is a real use-case. As I've said, my strongest argument is for consistency's sake, so Schema validators behave as schema validators (that is, they can contain arbitrarily nested schemas). Anyway, here's the patch... feel free to improve in any way you like as you're more knowledgeable of the widget internals ;) it should be Kevin's decision wether this gets in or not. All the tests pass, just needs some optimization at the merging of dicts, most of the bulk are tests so don't panic ;), the CompundWidget? is only touched in two lines and two extra utility functions. Regarding formencode's patching I'm not so sure about that, I *think* this is FormEncode's behaviour and might be for a good reason (don't know who it integrates in other framework so cannot say for sure). I think the correct thing to do is to adapt it to our nested error dicts Have a nice weekend too ;) Alberto Changed 11 years ago by alberto - attachment nested_schemas.patch added the patch. Should apply cleanly as I've removed ronald's hunk Changed 11 years ago by alberto - attachment nested_schemas.2.patch added last patch was broken, this is the good one comment:27 Changed 11 years ago by simon I think recursion is the clearest way. We can always optimise later if performance becomes an issue (if need be even in C). comment:28 Changed 11 years ago by alberto Hi, I'm attaching an optimized version. This one only incurs in the overhead of recursively updating dicts if we need to merge a Schema (possibly) nested error dict. 
It adds a flag parameter to _set_errors() and _update_error_dict_from_parent(). This means that if no Schema validators are used the overhead is minimal (only an extra if clause). Regarding the optimized C version, I was surprised I didn't find any built in function or external library that implemented this kind of functionality (I haven't looked really so no wonder ;). I guess that recursively updating nested dicts should be a fairly common necessity... If anyone knows about one please tell me :) Alberto Changed 11 years ago by alberto - attachment nested_schemas.3.patch added optimized version Changed 11 years ago by alberto - attachment nested_schemas.4.patch added v3 with stricter tests to check that errors from one level don't polute other levels comment:29 Changed 11 years ago by michele Hi all, First of all, excuse me if I sounded a bit harsh yesterday but I was coming back from a nervous football match, I'm really sorry! :-( Alberto, I also think as you and Simon said that for doing this thing the only solution is using a recursive function, your last patch is very similar to what I had in mind and I will build my patch upon it since I was going to also use a flag as you have done (what about modifying the adapter function to return a flag if the error dict is nested? that's the only case we should really cover, right?). And also sorry for the svn/test thing, I hadn't tried the latest version yet when commenting (I was at 846) and didn't notice test were passing. Finally, I do think we should support also this type of validation schema since people have different opinion and someone will need it for sure at some point, it doesn't matter if I like or not it. :D Sorry again, I will try my approach ASAP. Ciao! Changed 11 years ago by michele - attachment michele-nested-schemas-1.patch added patch comment:30 Changed 11 years ago by michele Ok, I've posted my patch. I also renamed some functions and implemented the nested "signal" by the adapt function. 
The trick I used is to recursively pass to every update_dict_by_parent a computed parent and it's associated dict, at the end of every recursion the dict is no more nested and hence treated as any other error dict by our update_dict_by_parent function. In other words I'm just simulating a children validation by iterating the nested error dict and taking advantage of the logic we already implemented in the update_dict_by_parent function. What do you think guys? any suggestion for the keys_to_remove list? I use it since you can't delete a dict key while you're iterating over it. comment:31 Changed 11 years ago by alberto Hi Michele, No worry, we all know how excited Italians get over Calcio ;) ... I've looked at your patch and looks very similar to mine. You've moved the nested-detection flag to adapt_schema_error_dict, which will optimize the most common case (I think) which is when schema error dicts are not nested, and you've removed the tail recursion from recursive_update by unrolling that logic at the end of set_errors. However, this time I like my patch better :) I think recursive_update is more clear, as most recursive algorithms are, and, being a separate function, easier to optimize if the need arises. However, it's normal that I see it clearer as I've wrote it :) I propose to use your adapt_schema_error dict and my recursive_update (probably unbinding it from the widget for easier reusal). Whatever we do I think should get done ASAP so we can move to something else as we're spending too much time on this :) I'll post a patch in 3 seconds ;) Regards, Alberto Changed 11 years ago by alberto - attachment nested_schemas.5.patch added how about this? 
comment:32 Changed 11 years ago by simon

What the hell, I'll jump into the fray as well ;) An optimisation (I haven't actually timed it; at the very least it's less code ;)) of Michele's code:

    def _update_dict_by_parent(self, orig_dict, new_dict, parent, nested=False):
        # Avoid any computation if new_dict is empty
        if new_dict:
            temp_dict = orig_dict
            for name, skip in itertools.ifilterfalse(itemgetter(1), parent):
                if not temp_dict.has_key(name):
                    temp_dict[name] = {}
                temp_dict = temp_dict[name]
            if nested:
                for (key, value) in filter(lambda (key, value): isinstance(value, dict),
                                           new_dict.iteritems()):
                    parent.append((key, False))
                    self._update_dict_by_parent(orig_dict, value, parent, True)
                    del new_dict[key]
            temp_dict.update(new_dict)

itemgetter is a Py2.4 thing, so it would have to be implemented manually for 2.3, but that's trivial.

comment:33 Changed 11 years ago by simon

For the record, I'm still in favour of a straightforward recursion.

comment:34 Changed 11 years ago by michele

Hi all, yes we (Italians) probably get too excited about Calcio; I shouldn't hack right after a football game, sorry again. :D

Back to this issue: yours is cleaner for sure, I don't have any problem with it. I was just wondering whether it covers every use case. The reason I used update_dict_by_parent is that this way I'm sure that while updating recursively we don't lose any other nested dict already present (by leveraging the same logic we use for every other update). As I said, my way just simulates a sequential children validation by flattening the nested dict backward and providing the right parent to use; it just feels more coherent with what we are already doing, but as you said, the two are quite similar. Anyway, I'm not sure whether we can actually hit this corner case... I can also avoid the keys_to_remove list; I can post an updated patch in an hour.

Anyway, do what you think is best. Within an hour I can probably test whether the problem I mentioned can arise (calling recursive_update on a dict that's not empty).

comment:35 Changed 11 years ago by alberto

Hi all, recursive_update should handle empty dicts properly; it's just a regular (slow) update():

    for k, v in frm.iteritems():
        to[k] = v

plus a recursive call on itself to handle the case where we're updating a dict with another dict. It can even be used with any two dicts that need to be recursively updated; there's no particular dependency on error dicts. But of course, you're free to test it just to make sure (though I'm pretty confident, as, surprisingly enough, I found an almost identical version when looking for something to get this job done faster).

Anyway, I really don't care which version gets in, as long as it passes the tests and is the most efficient and clearest solution, which, in this case, I think mine is ;) (at least clearer; efficiency-wise they're more or less the same). But I have no problem conceding if there is a better choice; in other cases your solution was obviously simpler and more elegant.

Simon, you're always welcome to the fray; the more people involved the better :)

Let's please settle this down, close this ticket and move on to something else; this is getting boring :)

Regards, Alberto

comment:36 Changed 11 years ago by michele

Ok, I checked the corner case I was talking about and it is handled the right way; feel free to commit your version.

Just for the sake of completeness, the corner case I'm talking about is this (not an empty dict but a non-empty one): there are situations where update_errors_dict_from_parent is called with a base_error_dict that already contains a series of nested errors generated from children validation (for example, you can check this by removing if_empty=None from int_validator). My solution is to call update_errors_dict_from_parent for every such key, passing a constructed parent; this case is then handled by just using Python's built-in dict update (temp_dict.update(errors_dict)) and the initial logic you can see in the update_errors_dict_from_parent function. Your solution also works, since you're always iterating over every single key of the dict and every nested dict recursively and replacing it; you're just using a customized update function for a dict. At this point, the only difference I can see lies in the two update functions being used: yours is in Python, while the built-in one (that I'm using) is implemented in C and hence already optimized. Anyway, that's not a big deal...

I would still like to rename the two functions as I did in my first patch, though:

    _retrieve_dict_from_parent      -> _get_dict_by_parent
    _update_errors_dict_from_parent -> _update_dict_by_parent

(with renaming of base_errors_dict to orig_dict and errors_dict to new_dict). What do you think? Sorry again. Ciao!

comment:37 Changed 11 years ago by alberto

Ok, I'll rename those funcs and variables and commit it sometime this afternoon; now I'm going to have lunch :) I'll keep recursive_update, as I really find it much clearer, and it can also be used on other dicts if needed. I think this solution also respects your original algorithm structure better (that was one of my design goals): it really only applies the recursive update when we're expecting a nested error dict from the schema.

So that said, if this solution is ok with you (both), just leave me unreplied and I'll close this later when I get back. Thanks both for the help!

Alberto

comment:38 Changed 11 years ago by alberto

- Status changed from new to closed
- Resolution set to fixed

comment:39 Changed 11 years ago by simon

Why not move recursive_update to util, as it's completely generic and may be of use elsewhere?

comment:40 Changed 11 years ago by alberto

comment:41 Changed 11 years ago by michele

Great work Alberto, I think that now widgets support any schema/validator combination! ;)

comment:42 Changed 11 years ago by alberto

I'm still not sure... Maybe we should reopen this? :D
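The recursive_update being discussed above can be sketched in a few lines of modern Python (the thread's own code is Python 2; the names and test data here are illustrative, not the actual TurboGears implementation):

```python
def recursive_update(to, frm):
    """Merge ``frm`` into ``to`` in place.

    Where both sides hold a dict under the same key, recurse instead of
    overwriting, so nested entries (e.g. error dicts produced by children
    validation) are not lost.
    """
    for k, v in frm.items():
        if isinstance(v, dict) and isinstance(to.get(k), dict):
            recursive_update(to[k], v)   # merge nested dicts instead of replacing
        else:
            to[k] = v                    # plain (slow) update() behaviour
    return to

# The corner case from comment 36: the target already holds nested errors.
base = {"child": {"age": "not a number"}, "name": "ok"}
recursive_update(base, {"child": {"email": "invalid"}})
print(base)  # the nested 'age' error survives alongside the new 'email' error
```

Michele's corner case is exactly what the recursive branch covers: without it, `to["child"]` would be replaced wholesale and the pre-existing nested error would be dropped.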
http://trac.turbogears.org/ticket/611
Sometimes we have to split a String into an array based on delimiters or a regular expression, for example reading a line from a CSV file and parsing it to get all the data into a String array. In this tutorial, we will learn how to convert a String to an array in a Java program.

String to Array in Java

The String class's split(String regex) method can be used to convert a String to an array in Java. If you are working with Java regular expressions, you can also use the Pattern class's split(String regex) method. Let's see how to convert a String to an array with a simple Java class example.

    package com.journaldev.util;

    import java.util.Arrays;
    import java.util.regex.Pattern;

    public class StringToArrayExample {

        /**
         * This class shows how to convert a String to a String array in Java
         * @param args
         */
        public static void main(String[] args) {
            String line = "My name is Pankaj";

            // using the String split method
            String[] words = line.split(" ");
            System.out.println(Arrays.toString(words));

            // using a java.util.regex Pattern
            Pattern pattern = Pattern.compile(" ");
            words = pattern.split(line);
            System.out.println(Arrays.toString(words));
        }
    }

The output of the above program is:

    [My, name, is, Pankaj]
    [My, name, is, Pankaj]

I appreciate your work, thanks for all the informative blog posts.
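For comparison only (this is not part of the original Java tutorial), the same two approaches shown above, a one-off split on a literal delimiter versus a precompiled regular-expression pattern, look like this in Python:

```python
import re

line = "My name is Pankaj"

# one-off split on a literal delimiter, like String.split(" ")
words = line.split(" ")
print(words)  # ['My', 'name', 'is', 'Pankaj']

# precompiled pattern, reusable across many lines,
# mirroring java.util.regex.Pattern.split
pattern = re.compile(r"\s+")
words2 = pattern.split(line)
print(words2)  # ['My', 'name', 'is', 'Pankaj']
```

As in Java, precompiling the pattern pays off when the same split is applied to many lines, such as every row of a CSV file.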
https://www.journaldev.com/776/string-to-array-java
- Button Pressing Example -- a different action takes place on the click of each button.
- Java AWT Package Example -- creating a Button on a Frame.
- Swing Button Example -- how to create a Swing button in Java.
- Adding a button to the bottom of a JFrame.
- Difference between a documentation comment and a multiline comment in Java.
- A market chart drawn with AWT, with a textbox for user input.
- Java Dialogs -- designing a frame layout (SpringLayout).
- Create a Container in Java AWT -- adding buttons to a Panel.
- AWT Tutorials -- creating multiple labels in a Java applet.
- Radio Button in Java -- the Checkbox class is used by AWT to create both radio buttons and checkboxes; the CheckboxGroup class places them on an AWT frame.
- Inserting a processing instruction and a comment node into a DOM document with JAXP.
- Comment and hidden comment in JSP -- a JSP comment is sent to the client; JSP scriptlets also support Java // and /* */ comments.
- How to set an image on a button using Swing.
- HTML5 comment tag <!-- --> -- definition and example.
- How to Create a Button on a Frame -- a command button on the Java AWT Frame.
- Accepting a string in one AWT text field and printing it in another.
- Event handling in Java AWT -- done through the java.awt package; events are an integral part of the Java platform.
- A Swing/AWT thread query.
- Using JTray in Java.
- Making an item inside a List active.
- BorderLayout Example -- arranging and resizing components in five positions.
- Using the Checkbox class to create a radio button.
- An info button example for iPhone/Xcode.
- Comparing the names of two buttons.
- Adding an image to a JPanel in Swing.
- A paint example frame.
- JavaScript comments -- hiding statements that are not supposed to run.
- A Java persistence example.
- Java JButton key-binding example -- binding a specific key to a button.
- Activating only one of several AWT buttons at a time.
- A JFileChooser opened from a button's ActionListener.
- A Java 7 "Hello World" example, including both Java comment styles (// and /* */).
- A Java client application example.
- A JList with a save button (ActionListener/DocumentListener).
- B+ trees in Java Swing, using a JTree and an "Add Node" button.
- The HashSet class in Java -- unique values only, iterator() and size() methods.
- Disabling the maximize property of an AWT frame.
- Creating a browse button with a text field in Java Swing.
- The interface extended by AWT event listeners.
- A JTree example.
- Comments in PHP -- single-line and multi-line.
- A graphical calculator using AWT, with a display field and a clear button.
- A thread life-cycle example in Java.
- JTable cell selection when a button is clicked.
- Setting an icon on a button in Java Swing -- setIcon(Icon), ImageIcon(String).
- Controls in AWT and their different types.
- Java Swing JButton -- JButton extends AbstractButton and implements a "push" button.
- A menu bar example in Swing.
- Saving JList or JTable data with a button.
- Java string comparison using the equals() method.
- Getting the host name in Java.
- The Comparable interface -- comparing two ages.
- Hiding a button using Java Swing.
- A popup calendar button in a table column using core Java Swing.
http://www.roseindia.net/tutorialhelp/comment/94273
Thinking About Item Renderers

Alex, once again I would just like to thank you for all the time you're giving the community. It's one thing to know what you are doing, and it's another to get the confidence that you know what you are doing by seeing what the engineers actually do when 'they' extend the classes that they wrote. :) I don't think a lot of people on flexcoders realize that you are a full-time Flex framework engineer and you still have time to give all this great advice and help out while still doing a great job programming our futures. Once again, thanks. Mike

------------
Mike, we really appreciate all the work you and other community leaders do. Every once in a while my schedule opens up enough for me to keep up with FlexCoders for a bit, but eventually I get buried again and have to stop for a while, and I'm glad you guys somehow find the time to consistently be there for others. -Alex

Posted by: Michael Schmalle | March 29, 2007 4:33 PM

Thanks a lot for this great blog. It helps to understand the recycling and usage of item renderer components, and it also made me think about those column renderers... GREAT JOB!

Posted by: Roman Gruhn | March 30, 2007 4:49 AM

It looks like there are threading problems with some of the samples: I'm using the Text Styles sample, but with a bigger list that scrolls. It works fine until you start using the scroll wheel, at which point the styles get all mixed up. I guess I'm a little surprised that all the functions like getTextStyles() aren't implemented with parameters (like your styleFunction or labelFunction).

------------
Ha! Good point! I didn't follow my own instructions. The stylesFunction doesn't reset the styles. It should look more like this:

    if (value < 45)
    {
        o.color = 0xFF0000;
        o.bold = false;
        o.underline = false;
    }
    else if (value > 45)
    {
        o.color = 0x00FF00;
        o.bold = true;
        o.underline = true;
    }
    else
    {
        o.color = 0;
        o.bold = false;
        o.underline = false;
    }

I updated the sample code. I also had to add some code to make sure the styles got fetched when changing the renderer's data property.

However, I must note that this isn't a threading issue, simply one of not handling recycling, as I mentioned in the blog post. There is no threading in ActionScript. And FWIW, getTextStyles doesn't take parameters because it is part of the style system, not a replaceable function like stylesFunction or labelFunction. That's why we override it to call stylesFunction only when you need it. Otherwise you'd pay a performance cost for generality when you don't need it. -Alex

Posted by: Andrew | April 1, 2007 8:48 AM

Hi Alex, thanks for these examples. It is nice to see the "recommended" way of doing these things, but I can't help but wonder why information this vital and widely applicable is showing up on an engineer's blog, 9 months after Flex was released, and only because he was lucky enough to have some free time. To me, this is the type of info that should have been featured in DevNet articles or whatever shortly after the launch. I follow numerous blogs and MXNA pretty vigilantly and I've never seen these approaches discussed or even mentioned. Anywhere. That seems extremely unfortunate, especially if the approaches people have come up with really do suffer from performance issues as much as you think they might. I think that could potentially reflect very poorly on the platform as a whole, when in fact it is just caused by developers not knowing any better. All gripes aside, though, I look forward to examining these examples closer in the coming days. One initial question: how do we know when to use validateNow() versus validateProperties(), etc.? I was either somewhat or completely unaware of most of the validation and other methods you used to commit changes. Thanks, Ben

---------------
Ben, I would have done this sooner if I had time, but we went from 2.0 to 2.0.1 to two separate hotfixes. If it hadn't been for the fact that my 3.0 feature schedule got pushed off to a different milestone, I wouldn't have gotten around to this at all. I'd been monitoring flexcomponents, but not flexcoders, due to the volume of email, and it was only because I had enough time to get back on flexcoders that I saw this trend. I had figured folks had figured this stuff out because we shipped the source code, but apparently it was too deeply buried, and I don't remember these kinds of questions on flexcomponents. So I started a blog and blogged. DevNet takes too much time, so I think I'll just keep updating the blog unless you think I should really formalize this stuff. Anyway, sorry it was late in coming.

As for your question on validateNow/validateProperties: the answer is to never call them unless you need to. Because DGItemRenderer is derived from UITextField, it has different behavior than UIComponent, so validateProperties is the equivalent of commitProperties. And validateNow() forces validation in the same frame, which is expensive unless you are the leaf node of the display tree.

Thanks for using Flex and helping out on the lists. We really appreciate your time and feedback. -Alex

Posted by: Ben | April 2, 2007 11:28 AM

Hi Alex, thanks for these examples. Regarding your checkbox in the DataGrid header: I really wish you had taken this to the next level. A very common use case is to allow a user to select all rows via a checkbox (as you've shown), and also to allow individual selection of rows. Once rows are selected, allow the user to delete them. I've been trying to solve this for too long, and would greatly appreciate a write-up and demo. Thanks

------------
Jeff, for spam control, your comments aren't posted until I "approve" them, and I was out all day. Anyway, your scenario is interesting; I'm not sure how soon I can get to it. You might post it on FlexCoders and see if someone else can figure it out sooner. In general, the principle should be to merge the checkbox list example with the checkbox header example. The checkbox list example shows that you can (and should) manage the selected state as a field in the dataProvider. -Alex

Posted by: jeff | April 3, 2007 2:35 PM

Hi Alex, thanks for the examples, can't have enough of them! I found a small bug that I thought was worth reporting, because I think people will eventually want to use this feature of the DataGrid as well. In the "Split Columns/Grouped Columns" example, when you click the header to sort the column, it throws an error, something about not being able to find the 'history' column or property. Cheers

--------
Thanks for the heads-up. I'll take a look if I get the chance. I expect that you can steal the code from CheckBoxHeader that turns off the sorting.

Posted by: Thijs Triemstra | April 4, 2007 2:22 AM

Jeff, I wrote a post about that scenario a while back at. It doesn't include code to delete the items, and it uses a different approach than the one Alex demonstrated here, but it should get you pretty close to your goal. To delete the items you'd just iterate over your dataProvider and remove any that had the property your checkbox sets.

Alex, I would be interested to get your opinion on the approach I took. It varies significantly from yours, but it still seems valid, because item renderers seem to be the main purpose of using ClassFactory. Am I wrong on this? Should the strategy I used be avoided? Thanks, Ben

------------
Hmm... I looked at your example quickly, so I might have missed something. ClassFactory is used in many places, like Charts and some of our other components. It provides another level of indirection over specifying a class to use as a "plug-in" of sorts. Any time I want to abstract the viewer from the thing it views, I will likely use IFactory, but you don't have to use ClassFactory; sometimes you'll want to make your own IFactory implementation. It looks like you tried to do that in your HeaderRenderer, but I think it isn't getting used if you assign it via ClassFactory. I'd think you could remove implement= and newInstance() and things like that.

However, the basic approach is fine. I think mine is better, mainly because I wrote it, but also because custom headers allow strong typing of the extra properties that you need; it's pretty easy to have a typo in a {} object declaration. Second, a custom column requires one less external variable to retain the selected state of the checkbox. To be picky, I still wouldn't use an HBox as the outer container, as it is pretty heavy, and I wouldn't use binding, as it is also heavy. I'd just listen for the appropriate events and set properties as necessary. I think your bindings may cause memory leaks if you end up trying to dump the renderers from memory; I don't think I saw an unwatch() call. -Alex

Posted by: Ben | April 4, 2007 9:49 AM

Hi Alex, sorry for bombarding your comments, but I just went back and looked at the docs. They show/recommend overriding the data setter and/or using ClassFactory, as I and others have been doing, so I'm a bit confused about why you recommend a different approach. Are these simply your preferred ways of implementing renderers, or does your approach come with advantages? I'm also not sure how your approach could be applied to more stylized renderers, like a horizontally centered CheckBox, or a renderer that contains a small animation. I am really just trying to understand these topics as thoroughly as possible, because my current and future Flex work uses renderers very extensively. Thanks, Ben

-------------
In the software industry, I'm almost an old fogey. I've been around since DOS and floppy disks, 640Kb and assembly code. Because of that, I'm a code minimalist, and I am willing to take a few extra steps to make things small and fast. I'm also one of the people on call when a customer has performance or size problems. So given that, my design bias is toward slightly more sophisticated things that don't follow the recipe nicely but come out optimized for one thing or another, and often don't optimize for development time.

That puts us in a bind. As a new platform, we want it to be really easy and fast to develop code and learn how to customize things in Flex. If we make the docs thicker with stuff like my item renderer post, we risk folks saying that Flex is too hard to learn, that there are too many patterns, etc. If we had the time, there'd be an advanced guide, an "Inside Flex" book like "Inside Windows", etc. Good folks like Colin Moock and Chafic Kazoun and others are cranking out great books like this as we speak, but they are also targeting the masses, which is what we need to get a beachhead as a platform. So the best I can do is blog on occasion.

The examples in LiveDocs are perfectly fine and were reviewed by me before publishing. They just aren't optimized, because they are meant to solve a problem in a cleanly defined pattern that can be repeated in other scenarios. And for most folks, it will take them 10 minutes to center an image by putting it in a Canvas or HBox and 10 hours to figure out how to subclass Image to do the same. That 10-minute solution is critical to proving to their managers that Flex is great, and my hope was that, by the time they really need to optimize, it will only take 2 hours, since they've learned the other 8 hours of stuff along the way.

Unfortunately, my recent stint on FlexCoders has caused some worry that there's still a big gap for a lot of people between doing things in MXML and doing things in AS. So I tried to close that gap with my blog posts, yet I can see there's still more that is needed. I hope I can find more time to put up more samples. Because I've got 20+ years of experience and 15+ of object-oriented programming, whenever Flex doesn't do something I want, I always find the right base class and start subclassing. Others want to use MXML, which is fine if that's your comfort zone, but if you need speed or smaller size, many times you've got to go lower-level. That's hard for many, and I hope some of these examples can help out. Thanks for being a helpful and active member of our community. We really appreciate the support and feedback. -Alex

Posted by: Ben | April 4, 2007 12:16 PM

Thanks for the detailed responses, Alex. I completely agree that getting people productive quickly is paramount; it seems to be one of the things driving the buzz around Flex. Having that context, I see why articles like these wouldn't have been appropriate at launch time. I look forward to learning more about optimization and the finer details of how the framework works. Thanks again, Ben

Posted by: Ben | April 5, 2007 6:28 AM

Hi Alex, thanks a lot for your examples. I tried out the color change in cells after editing, but I'm facing a problem: whenever I use the horizontal scroll bar to edit the next column, the colors get messed up. Can you please give a solution for this?

------------
Vyshak, I'm not quite sure what your scenario is, as I don't think I know what you mean by "horizontal scroll bar to edit next column", so you can provide more details if you wish. However, I must make clear that these examples are "unsupported". I'll try to fix things if I have time, but my main job is to develop new features for Flex 3.0. These examples were intended to serve as a starting point for your own solutions. Hopefully you can find the problem yourself, or post your example to FlexCoders and maybe others can help. -Alex

Posted by: Vyshak | April 9, 2007 6:48 AM

Hi Alex, thanks for your reply despite your busy schedule. I made it work and now it's working fine. Thanks a million again for providing such a wonderful article; it helped me a lot.

Posted by: Vyshak | April 10, 2007 1:53 AM
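Vyshak's mixed-up cell colors, like Andrew's mixed-up styles when scrolling earlier in the thread, come down to the renderer-recycling pitfall Alex describes: a list reuses a small pool of renderer instances for whatever rows scroll into view, so any style branch that does not assign a value leaves the previous row's state behind. A small language-neutral simulation of the idea in Python (all names here are hypothetical; Flex renderers are ActionScript, so this is only an illustration of the pitfall, not Flex code):

```python
class Renderer:
    """Stand-in for a recycled item renderer with one style property."""
    def __init__(self):
        self.bold = False

    def apply_buggy(self, value):
        # Sets the style on only one branch: recycled state leaks through.
        if value > 45:
            self.bold = True

    def apply_fixed(self, value):
        # Assigns the style on every branch, as in the corrected sample.
        self.bold = value > 45

def render_visible_rows(renderer_pool, values, apply):
    # A list reuses the same few renderers for the rows scrolled into view.
    styles = []
    for i, value in enumerate(values):
        r = renderer_pool[i % len(renderer_pool)]
        apply(r, value)
        styles.append(r.bold)
    return styles

rows = [50, 10, 20, 30]  # only row 0 should be bold
print(render_visible_rows([Renderer(), Renderer()], rows, Renderer.apply_buggy))
# [True, False, True, False] -- row 2 wrongly inherits row 0's bold state
print(render_visible_rows([Renderer(), Renderer()], rows, Renderer.apply_fixed))
# [True, False, False, False]
```

The fix is the same in any framework that recycles views: a style function must set every property on every branch, rather than assuming the renderer starts from a clean state.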
------------- Glad we could help. You should be able to use or borrow from the CenteredWidgets example. Posted by: Lisa Twede | April 11, 2007 10:53 AM Alex, Good to see you blogging. Indeed, very interesting post. Not many people know about different patterns (algorithms/design) used in Flex framework. It really helps to come up with best-practices, when you (an Architect of Flex framework) suggests recommended ways and also explains things (design, decisions etc). Thanks for taking time out and making such posts.. regards -abdul Posted by: Abdul Qabiz | April 12, 2007 11:10 PM Alex, I am trying to create a datagrid that will change the column headers dynamically throughout my application. I know how to add rows dynamically but how do you create columns dynamically?? Thanks for taking the time to help us who are still learning this great language. Jon ----------- Columns are different. Rows are represented by collections like ArrayCollection, but the set of columns is just a plain array and not a collection. The rules for many array properties in Flex and Flash is to get a copy of the array, change it and set it again. If you just change members of the array, the component probably won't see it as there is no way to watch for changes to array members (well, there is, but it is too much overhead). So the pattern looks something like this: // get the current array of columns var cols:Array = dg.columns; // create new column var col:DataGridColumn = new DataGridColumn(); col.dataField = ... col.headerText = ... .... // add to end, can use splice or something else cols.push(col); // shove the whole modifed array into the datagrid dg.columns = cols; Posted by: Jonathan Marecki | April 19, 2007 2:14 PM Hi alex i've decided to use your list of checkboxes demo to create something similar for a form im using which enables the user to select a list of courses that they can enrol on to. i am using amfphp to retrieve the data into an array collection. 
so far i've managed to get the list of courses to show but how do i bind the checkboxes to the data in the array collection? ----------------- Normally, you set selectedField to the name of the property that represents the checkbox Posted by: anthony | April 22, 2007 6:55 AM Thanks for contributing this great blog for those of us who are new, these kinds of advanced Flex tips and suggestions are exactly what many of us need. I have a question about your design decisions in BlinkWhenChangedRenderer: early in the blog post you mention that ItemRenderers may be recycled across different rows -- but you are saving row-specific state in the BlinkWhenChangedRenderer. Isn't it possible for #validateNow() to be invoked on a BlinkWhenChangedRenderer for a row when that renderer's "lastUID" ivar is set to the lastUID of a different row? ---------------- Yes, it is possible so the code doesn't blink in that case (of course there could be a bug Posted by: Erik | April 24, 2007 2:06 PM Thanks for the information that you've been posting -- it's really helped me a lot. I do have a problem though with the "Text Color and Styles in Item Renderers" example and I'm wondering if you might know how to solve it. I need to change the text style in one column whenever an item renderer in another column updates the icon that is displayed in that column. I've implemented the code for the "Text Color and Styles in Item Renderers" example but the text style only appears to be visually applied when the next "change" event occurs. For example, when I click in a row and a DataGrid "change" event is dispatched a function is called that changes a dataField value resulting in an icon being updated in a column. This works fine. 
However, in my text column, when the ComputedStylesRenderer calls the stylesFunction and the stylesFunction returns an object with the new style based on what is being displayed in the icon column, the text in the ComputedStylesColumn does not change until another row in the DataGrid is selected. I'd like to change the style for the text in the ComputedStylesColumn at the same time the icon column is updated so that they are in sync. Is there some way to force the text in the ComputedStylesColumn to be drawn with the new style as soon as stylesFunction returns the new style information? Thanks for any ideas!

------------------
In theory, a call to invalidateDisplayList should shake things up and cause a redraw.

Posted by: Paul Whitelock | May 10, 2007 10:34 AM

>> In theory, a call to invalidateDisplayList should shake things up and cause a redraw.

Originally I tried calling invalidateDisplayList from a number of places (including in the ComputedStylesRenderer), but the updated text style is not displayed until a new row is selected in the DataGrid. I also tried updating the data in the cell to see if that would force the style to be applied, but that didn't work either. It looks to me like it is not possible to dynamically change the text style using the ComputedStylesRenderer technique (a different DataGrid row must be selected before the style is shown). Of course, it could be that I just haven't looked under the right rock yet :-)

-----------------
You might need to call validateNow on the renderer instead. We're in a busy period right now, but maybe I'll get a chance to look in a couple of weeks. If you have a simple test case, zip it, rename it so our filter doesn't catch it, and email it to me.

Posted by: Paul Whitelock | May 11, 2007 11:28 AM

Hi Alex,

Thanks for all the stuff you have been posting. Your Split Columns/Grouped Columns example is something I was looking for for a long time.
I was playing around with custom item renderers and item editors and noticed that if I use a custom item renderer, the DataGridEvent.reason is not set properly on the itemEditEnd event. Could you shed some light on that?

For example, I have a datagrid with 2 columns, A & B. Column B has a custom renderer. If I try to handle the itemEditEnd event when the user finishes editing column A by clicking on column B, then the event.reason that is set on the DataGridEvent is DataGridEventReason.OTHER instead of DataGridEventReason.NEW_COLUMN as expected.

-------------------------
Maybe the .editable field isn't set right? I won't have time to look into this for a while, but you can send me a test case if you want.

Posted by: Sanjucta Ghose | May 14, 2007 12:25 AM

Hi Alex, thanks a lot for your great examples. I've got a question about the SplitColumn example. If you try to sort it (as Jeff has spotted), it doesn't know how to sort on "history". Your suggestion was to use the CheckBoxHeader example, but that turns sorting off completely. How would I be able to sort the rows by selecting the "High" or "Low" child header? Any help with this would really be great! Thanks!

------------------
We're in a busy period right now so I can't actually take the time to work on this, but I would get the HEADER_RELEASE event, call preventDefault(), and then sort based on the values of xmouse and ymouse. Good luck.

Posted by: Lotte | May 15, 2007 12:20 PM

Hi Alex, thanks for your great demo. I tried to bind a link to a column header for the datagrid; it would open a new window for the link. Do you have any idea about it? Thank you!

---------------------
Should be possible. If you use LinkButton instead of CheckBox in the CheckBoxHeader and call navigateToURL, it should do what you want.

Posted by: Leon | May 23, 2007 7:41 PM

I have a DataGrid with editable columns.
If the user enters a value that is too high, I would like the cell to behave similarly to the way other controls behave when you set the errorString property to an error message. I tried making an item renderer that extends TextInput, and it works, but when you tab out of cells the itemEditorItemEditBeginHandler function in DataGrid.as blows up because the _editedItemPosition is null. Any advice?

----------------------
The model we have for that is that you validate the input on ITEM_EDIT_END and call preventDefault() if the data isn't valid. You can additionally tell the renderer that the data is bad so it can display itself in some other way. I haven't seen the issue with _editedItemPosition being null, although the latest hotfix has one additional protection against that. Feel free to post a mini-example on FlexCoders.

Posted by: Lisa Twede | May 25, 2007 10:05 AM

Hi Alex, thanks so much for all your posts so far. I discovered your blog earlier today and have not stopped reading since. This one is particularly interesting to me. I've been using event bubbling to capture events from custom item renderers until now, but I'll certainly re-evaluate my approach to both dispatching events and building custom renderers after reading this.

A small request ... would you consider enabling the 'view source' option on your future Flash examples? If you just need to glance over the code to get the gist, it's so much easier than having to download, unzip and load it up in some text viewer.

Cheers again,
Neil

----------
I'll look into it.

Posted by: nwebb | May 31, 2007 8:27 AM

Hi Alex, great info, thanks. One small thing that's puzzling me is this - when you mention event dispatching from a custom renderer you use this code:

var event:Event = new ListEvent("myCustomEventName");
event.itemRenderer = this;
owner.dispatchEvent(event);

ListEvent doesn't appear to be dynamic or specify an "itemRenderer" property, so I get a compiler error. Could you clarify what to do here?
Thanks

-----------------
mx.events.ListEvent does have an itemRenderer property.

Posted by: nk | June 4, 2007 7:37 AM

Alex-

Great article.. it helped out a bunch. I have a custom renderer that I'm creating with a ClassFactory, and it is working great, but I can't figure out how to delete everything from memory once I'm finished with it. I removed everything from the list and set the dataProvider to null, but it still lives in memory.. any ideas?

Thanks,
Peter

------------------------
The new beta has a memory profiler that can help you track this down. Typically, though, you have to make sure listeners are removed, as well as other references.

Posted by: Peter | June 13, 2007 4:23 PM

Thanks for the articles! I'm a newbie with Flex; just wondering if you can help me. I want to get some information from a web service and use that information to create URL links; the web service returns the label and the URL. I need to open the link when the user clicks on it. I was thinking of using a one-column datagrid. I thought initially that this was the best way to go - but now I'm finding it very difficult to do. How would you do this?

-----------------
Probably many ways to do this. Look at the Repeater examples. You could use Text components or LinkButtons in the Repeater. DataGrid would work, but may not be the UI you want.

Posted by: rod | June 25, 2007 7:56 AM

In "Split Columns/Grouped Columns", how should I make multiple columns instead of just left and right columns?

----------------------
In the same way I split into two columns, you can certainly split into three or more.

Posted by: JK | June 25, 2007 9:46 PM

These examples are wonderful; they certainly address many issues that seem to be almost universal throughout the Flex/AS community. Thanks again, Alex. Just like JK, I am also trying to split columns into more than 2 (left and right). But I was wondering how you would do it dynamically, with a value determined by the user or at run time?
I am still very new to Flex, and any sort of coding example would be much appreciated.

-----------------
I don't have time to put together code right now. Basically, I would change the leftColumn/rightColumn in the custom column to be an array. The column header and renderers would go through the array and create headers and renderers for each column in the array. You would specify the columns in the main app just like you do for the column set in the main datagrid. If you have questions, post to FlexCoders; there are plenty of folks there who can help.

Posted by: NR | June 26, 2007 4:00 PM

Hi Alex, thanks for the samples. In split columns, when I try to apply an itemRenderer to SplitDataGridChildColumn, it is not getting applied. Any idea how to overcome this would help me. Thanks.

--------------------
It looks like there is a bug in SplitDataGridItemRenderer. Around line 135 or so, it is using headerRenderer instead of itemRenderer. See if that gets it to work for you.

Posted by: JK | July 4, 2007 4:16 AM

Hi Alex,

Great stuff here, really helpful. I am trying to do a filtering combobox at the top of a column on a datagrid, just like your example, but I am having trouble using remote data in the combobox and datagrid instead of static arrays within the code. Once I call my HTTPService, I get nothing back unless I change the code from: to: in which case, I get data for the grid but not the combobox, obviously. Any suggestions? Thanks!!!

----------------
You want to assign the columns[0].dataProvider (and the dg dataProvider) when the results of the server request come back. You probably have a result="..." handler for your HTTPService.

Posted by: Bob | July 17, 2007 10:25 AM

Hi again,

Thanks for the tip! You were right, and now I get data in the grid -- but in the ComboBox I just get [object Object]. I added a custom labelFunction there, but every attempt just results in [object Object] in that combobox.
It seems like the ComboBoxHeaderColumn wants a labelField as well, but even if I add that var to the ComboBoxHeaderColumn function in ComboBoxHeaderColumn.as, I still get the same result. Sorry to bug you with this, but I am stumped. Can you help?

Posted by: Bob | July 18, 2007 4:12 PM

Hi yet again,

Great news: I figured it out. I had to add the following:

in ComboBoxHeaderColumn.as:
public var comboBoxLabelField:String;

in dg.mxml:
dg1.columns[0].comboBoxLabelField = "myDBvalue";

and in ComboBoxHeaderRenderer.as, within the override public function set data(value:Object):void:
labelField = _data.comboBoxLabelField;

Now I have the labelField for the ComboBox and the dataField for the DataGridColumn in one element. Sorry for the extra posts, but I thought I ought to share my findings!

This does bring to mind a question: is it possible to sort a column by clicking on the ComboBox at the top of that column, and then open the ComboBox and sort the grid by selecting a value from the ComboBox drop-down? If you know, please share. If not, I'm gonna try it anyway and see how it goes. Thanks again for the great blog.

--------------
You should be able to customize sorts and filters based on the ComboBoxHeader.

Posted by: Bob | July 18, 2007 5:25 PM

OK, sorry to bug you again, but I have hit a wall trying to get a ComboBox to both filter and sort. I like the way the PopUpMenuButton offers the ability to fire off one event when you click the button, and drops down a menu only if you click the arrow at the right. Is there any way to modify the ComboBox (or ComboBoxHeaderRenderer in this case) to emulate this behavior? I'd love to have the ComboBoxHeaderRenderer sort its column when clicked to the left of the arrow, and drop down a menu when the arrow is clicked that has a list of items which filter the datagrid when clicked (which I am already doing with the drop-down items; I just can't separate the arrow from the rest of the button).
Any suggestions you can offer would be greatly appreciated!

--------------------------
You might try putting a PopUpMenuButton next to a Label or DataGridItemRenderer in the header.

Posted by: Bob | August 1, 2007 4:10 PM

Hi Alex,

Is there anything which can display complete HTML content in Flex, along with images and all?

-----------------
Not in the web player. Apollo/AIR will have an HTML component. Also, google HTMLComponent, as some folks use a floating IFrame over their Flex app.

Posted by: Ashish Mishra | August 3, 2007 12:21 AM

Hi Alex,

Thank you for posting these great examples. I've implemented the Background Color Renderer and the HTML Renderer and made a few modifications to suit my live data application. All is working perfectly. I just have one question: is it possible to show a hand cursor when mousing over data in the datagrid? I can achieve this with an inline MXML itemRenderer, but it would be a lot more efficient if I could achieve it using a custom renderer.

Thanks in advance,
Karl

--------------
I'd check out buttonMode.

Posted by: Karl | August 10, 2007 1:47 AM

Hi Alex,

Thanks for your response. Which component would you recommend using as a renderer that has both backgroundColor and buttonMode properties? It can be achieved by using a Canvas and Text, but I was hoping for a more lightweight solution. Your suggestions would be much appreciated.

Thanks,
Karl

---------------
I would use UIComponent to wrap the DataGridItemRenderer.

Posted by: Karl | August 10, 2007 11:46 PM

Hi,

Great examples, but the one with a header renderer containing a checkbox throws an RTE when clicking the header. I'm on Moxie M2; maybe that's why, since it seems to be working in your demo.

//Morgan

-------------------
These examples are not guaranteed in all scenarios. I'll have to remember to update them when Moxie ships.

Posted by: Morgan | September 4, 2007 4:18 AM

Hi Alex,

I spent 4 days trying to optimize the scrolling speed issue in my DG, and finally you saved my life.
I am now using a mix of the BackgroundColor and TextColor renderers you have posted. The display response is now amazing. Now I am trying to include a graphic property in order to be able to draw either a rectangle filled with a gradient or a circle. Unfortunately, I can't figure it out. updateDisplayList does not work with DataGridItemRenderer, and as far as I understood, it behaves as a TextField. Even if I use CSS styles, I cannot implement backgroundImage or backgroundGradientColors. Would you have any tips? How could I extend that renderer to use graphics? Many thanks.

Regards,
Rudolf.

--------------
The most lightweight thing you could do is subclass Sprite, implement IListItemRenderer (and maybe IDropInListItemRenderer), and draw your graphic in there. The fastest thing to do might be to take a Label, leave the text blank, and draw the graphic in the .graphics layer.

Posted by: Rudolf | September 6, 2007 6:03 AM

Alex,

Thanks for the blog, I've found it very useful. I extended the Split Column example to support multiple columns (more than two) and published the code at: (I noticed several other people asked about multiple column split)

------------
Thanks for helping out.

Posted by: Roman | September 7, 2007 3:08 PM

Hi Alex,

I have a question: is it possible to hide/show the left or right SplitDataGridChildColumn? When I set the visible property of SplitDataGridChildColumn to false, it remains visible... Could you help me?

Thanks,
Pietro

--------------------
Not in my example, but I'm sure it is possible to modify the example to do so. I don't have time to help you out right now.
You might be better off just hiding the single versions of the columns and showing them when you hide the split column.

Posted by: Pietro | November 8, 2007 2:12 AM

Hi, as noted before, the event type should be ListEvent in order to be able to set its itemRenderer property; you should correct this in your post, thanx

var event:ListEvent = new ListEvent("myCustomEventName");

-----------------------
so noted.

Posted by: Benoit Jadinon | November 19, 2007 10:23 AM

Dear Alex,

Thank you for your article. I consider it very useful. In your Sub-Object Demo example you showed me another solution to the issue of indicating to DataGridItemRenderer which sub-object to display. My purpose is to create an efficient way to render columns (create a pivot table) containing information about the quantity of a product (each product listed in a separate row) to be delivered (or not) in a certain delivery (a list of unique deliveries in ascending arrival-date order). The pattern for adding columns is obvious to me. It worked with a container-based DataGridItemRenderer that implemented "mx.controls.listClasses.IDropInListItemRenderer, mx.core.IFactory". But as you wrote - containers are heavy. In your example you set deepDataField and itemRenderer in the . In a modified version, instead of hard-coded markup for the column, I have a function that adds a given number of columns. The problem is that the "col.itemRenderer =" part raises, in Flex Builder, "Implicit coercion of type DeepDataGridItemRenderer to an unrelated type mx.core:IFactory". Is that correct? Or what should be tweaked to make this addColumns function work with your example ActionScript code?
public function addColumns(cols_no:Number):void
{
    var cols:Array = dg1.columns;
    var i:int;
    for (i = 0; i < cols_no; i++)
    {
        var col:DeepDataGridColumn = new DeepDataGridColumn();
        col.headerText = 'D' + i;
        col.dataField = "deliveries";
        col.deepDataField = "deliveries.D" + i;
        col.itemRenderer = new DeepDataGridItemRenderer();
        cols.push(col);
    }
    dg1.columns = cols;
}

Best regards,
Greg

------------------------
The MXML compiler autogenerates some code and turns:

itemRenderer="DeepDataGridItemRenderer"

into

itemRenderer = new ClassFactory(DeepDataGridItemRenderer)

The -keep option in MXML will reveal most of this code-gen.

Posted by: Greg | November 20, 2007 2:20 PM

I have a datagrid which has a certain number of rows. Selecting a row should bold the text in that row. On selecting any other row, the previously selected row's text should be unbolded. In Flex 1.5 it was easily achievable by handling this situation in setValue(), but in Flex 2.0 I'm totally confused about how to go about it. Please help?

----------------
The same way I changed colors in the examples should allow you to change to bold.

Posted by: Adit Shetty | November 22, 2007 2:58 AM

Hi, great info on DataGrid renderers. Thanks. I wonder how I can use sorting when there is a custom renderer in the header of the DataGrid?

---------
You would display your own sort arrow and/or provide sorting UI in the renderer. For example, one of the combobox entries would be "alphabetical".

Posted by: darko | November 22, 2007 2:06 PM

Hi Alex,

Thanks for your great examples! I use your ComboBoxHeader with a few modifications. Everything works well, but my concern is about memory usage. When I run the profiler on my application, it shows me multiple instances of ComboBoxHeaderRenderer, and the garbage collector never removes them from memory. I tried your sample and it shows me only 4 instances at initialization. Why? Is there any way to delete those instances? Even when my datagrid is removed by the PopUpManager, the memory is not released.
Thanks

--------------
Are you saying my example leaks, or just yours?

Posted by: MJ | November 26, 2007 11:50 AM

Hi Alex,

I'm trying to use the text styles and color, and it works, but I have a question. If I wanted to have a separate button that modified the price of a particular item in my array, the datagrid doesn't seem to take on the color that would reflect the new price. I've tried to use validateNow() on the datagrid with no luck. Do you know of a way that this can be done? Thanks.

---------------
Call invalidateList() on the dataGrid.

Posted by: shaf | December 13, 2007 11:08 AM

Hi,

I am new to Flex and trying to understand the inner workings of DataGridItemRenderer(s) and DataGrids. I went through the Flex framework code. But even then I couldn't find answers to some of the questions, like why the _listData property could be null, and when we have the _data property not available but the _listData property available. Basically, the difference between _listData and _data is not very clear to me. Any pointers would be of great help.

------------------
data is the item in the dataProvider that the renderer should display. It can be null if there is no data or there are null items in the dataProvider. listData is a data structure that describes more information about the data and the list class, such as its UID, what itemToLabel returned, and for DataGrids, information about the column that the renderer is in.

Posted by: Nikhil | January 7, 2008 10:45 PM

Hi,

Just getting into itemRenderers with Flex (v3 beta 3) and finding some odd behavior with a scrolling tile grid that has an itemRenderer set, though it does make sense within the context of the discussion of recycling. My renderer has a text field and an image. When I scroll the TileList and then scroll back to the previous position, the items seem to be overlapped - the text gets all munged together from what was there previously. Not doing anything special in my renderer; do I need to be?
Thanks

----------------------
Sounds like you might be making new children in the renderer instead of reusing old ones or destroying the old ones? This is a better discussion for FlexCoders.

Posted by: Evan | January 17, 2008 2:08 PM

Hi Alex,

Thanks for your blink example. It helped us a bunch on one of our projects. I have a question, though. I have been able to use the blink on a particular cell because we are using an item renderer for the cell, but can we do the same for an entire row? I want the entire row to blink, not just one cell. Please let me know if this is achievable. If yes, then how do we do it? Thanks in advance!

Cheers!!

------------
You could blink all cells on a row, or you can make a more complex renderer that can draw outside its boundaries. It might be easier just to blink all cells in a row.

Posted by: Sathya | January 29, 2008 9:20 AM

Hi,

Thanks for the great post. I am working on a grid in which I have to provide custom sorting. I am able to do that by using the header_release event and by calling event.preventDefault(). Now the issue is, since I am using event.preventDefault(), the placeSortArrow() method of DataGrid is not being called, and I cannot call it since it is a private method. So I am not seeing any sort arrow in my columns. Any pointers regarding this would be of great help.

--------------
I would use a custom header renderer that had its own way of displaying the sort. The DataGrid will show the sort arrow if it sees a single SortField in the sort, so your sort is more complex and may need a different UI anyway.

Posted by: Nikhil | January 30, 2008 2:56 AM

Let's say you have a function in your main application file you want to reference. How do you reference it from within your itemRenderer component? Everything I try results in a 1069 error:

ReferenceError: Error #1069: Property myFunction not found on myApplication and there is no default value.

-------------------
Depends on how you wrote it.
If it is an inline renderer (with mx:Component), then use outerDocument.myFunction(). If it is in a separate MXML file, use parentDocument.myFunction(), and if it is an AS file, use owner.document.myFunction().

Posted by: judah | February 2, 2008 11:09 PM

In my last question I asked how to access a function defined in the main application from within my itemRenderer component. Well, it turns out that unless the function (in the main application) is public, it will not be found, thus creating the error. Here is the item renderer:

<mx:HBox xmlns:mx="" horizontalAlign="center">
    <mx:Script>
        <![CDATA[
            import mx.controls.DataGrid;
            private function handleClick(event:Event):void
            {
                DataGrid(owner).parentApplication['goToEditAlarm'](data);
            }
        ]]>
    </mx:Script>
    <mx:Button label="Edit" click="{handleClick(event)}"/>
</mx:HBox>

Posted by: judah | February 2, 2008 11:28 PM

Hi,

I'm a newbie in Flex. How can you make a LinkButton function like a VScrollBar? I mean two LinkButtons: one for scrolling up and one for scrolling down. And when I click and hold the up LinkButton, it must scroll continuously. With the workaround I'm doing now, it does not scroll while I click and hold the LinkButton; you must click it again and again to continue scrolling. Thanks in advance.

----------------------
Set autoRepeat=true on the LinkButton and listen for buttonDown instead of click.

Posted by: alex | February 5, 2008 4:10 AM

Hi Alex,

Great information. Does the datagrid allow you to simply enter data starting with an empty grid, dropping down to the next row after entering data on the line above, and continuing to enter rows of data?

---------------
You can do that, but it isn't built in. Check FlexCoders or search the internet for an example.

Posted by: Larry | February 5, 2008 12:36 PM

Hi Alex, thanks for all your help. I have one more question regarding the performance of DataGridItemRenderer.
I have a datagrid with 6 columns, and one of the columns (column number 5) can have the value "Unallocated". When this value is "Unallocated" I have to change it to "Unallocated [u]Allocate To Team[/u]". This "Allocate To Team" should be a link, and the whole text "Unallocated Allocate To Team" should be part of a single cell. To solve this, I wrote a custom renderer which extends HBox and has two label fields - one label for the 'Unallocated' text and the other one for the 'Allocate To Team' link text. (Users should be able to click on 'Allocate To Team' but not on the 'Unallocated' text.) A pop-up appears in my application when the user clicks on the 'Allocate To Team' link. This works fine but performs very poorly. So I was thinking about extending DataGridItemRenderer, as you have mentioned in your previous posts. So I am setting the htmlText property using the following code:

htmlText = cellValue + " Allocate To Team";

I can see this text as a link in the DataGrid cell. But when I click on this link, nothing happens. Any pointers regarding this would be of great help.

-------------------
You probably have to set selectable=true on the subclass.

Posted by: Nikhil | February 7, 2008 3:13 AM

Hi,

Thanks for the tip. Setting 'selectable = true' worked for me. Thanks.

I have one more question regarding rendering. Let's say in the DataGrid I have two columns, EmployeeName and Department. Now let's assume EmployeeName is 'Nikhil' and 'Nikhil' is in two departments, 'Department1' and 'Department2'. I have this requirement that if in a grid we have two rows with similar values - for example, row 1 has 'Nikhil','Department1' and row 2 has 'Nikhil','Department2' - and these two rows come in the grid one after the other, then in the second row we should not display the employee name and should just display the department. Basically, we are trying to group the data on employee names.
So the data should be displayed like:

Nikhil,Department1
------,Department2
Rikhil,Department1

But if these two rows don't come one after the other - if the user does a sort on, say, Department - then these two rows might not be adjacent. If this happens, I have to display the complete information. The data could then look like this:

Nikhil,Department1
Rikhil,Department1
Nikhil,Department2

So if I am rendering some row, I should know the data in the row above it. But as you mentioned in your posts, rendering happens in a random order. Is there a way to solve this - to force rendering in a predefined order, or in some other way?

-------------
Yes, if your data is in an Array. It'll work for XML as well, but it is a bit slower. You need to find a way to compute the index of the item the renderer is displaying. One way is to use listData.rowIndex + owner.verticalScrollPosition. Then get the previous item from the collection and check its values.

Posted by: Nikhil | February 13, 2008 9:09 PM

Alex,

Thanks for the amazing blog! I had a question about checkbox itemRenderers. In my application, I basically want to use checkboxes as row-selection indicators - they're not fed by data. What is the best way to handle this? Some options I see are:

a) Trigger row selection on a checkbox select. This way the checkbox selection is always tied into the grid's selectedItems.
b) Store row indices on checkbox select, and write a method, say getCheckedItems, to return this array of indices.

Which route would you recommend, or are there better alternatives? Thanks!

-------------
I haven't cooked up an example for that, but it is a common request. Google around and see if you can find a working example. Last time I thought about it, I would change the selectedIndices of the List as the checkboxes got checked.
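A minimal sketch of that last suggestion - a CheckBox renderer that drives the grid's selection directly. The class name and the exact wiring are illustrative, not framework code, and recycling and allowMultipleSelection details would need care in a real app:

```actionscript
package
{
import flash.events.MouseEvent;

import mx.controls.CheckBox;
import mx.controls.DataGrid;

// Hypothetical renderer: a CheckBox whose checked state mirrors and
// drives row selection on the owning DataGrid (not fed by item data).
public class SelectionCheckBox extends CheckBox
{
    override public function set data(value:Object):void
    {
        super.data = value;
        var dg:DataGrid = owner as DataGrid;
        if (dg)  // reflect the grid's current selection in the box
            selected = dg.selectedItems.indexOf(value) != -1;
    }

    override protected function clickHandler(event:MouseEvent):void
    {
        super.clickHandler(event);
        var dg:DataGrid = owner as DataGrid;
        if (!dg)
            return;
        // copy, modify, and reassign -- the same pattern as dg.columns
        var items:Array = dg.selectedItems.slice();
        var idx:int = items.indexOf(data);
        if (selected && idx == -1)
            items.push(data);
        else if (!selected && idx != -1)
            items.splice(idx, 1);
        dg.selectedItems = items;
    }
}
}
```

With allowMultipleSelection="true" on the grid, checking boxes accumulates rows in selectedItems, which is roughly what option (a) above asks for.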
Posted by: Anita | February 14, 2008 10:51 AM

Hi Alex,

I need to know how to show a hand cursor when mousing over data in the datagrid. Help me find a solution for that.

---------------
useHandCursor=true

Posted by: jaguirkhan | February 16, 2008 3:09 AM

Hi Alex, first of all I have to thank you guys for the solution on this itemRenderer; Flex 1.5 was a pain in the a..! I have a question for you. I would like to subclass the datagrid to make it dispatch an event when the user clicks on a row that has no data at all. For example, I have a datagrid where dp.length == 4 and rowCount == 10; I want to know if the user clicks on the last six rows. Can you point me in the right direction? By the way, I really admire your work :)

Cato

---------------------
Alex responds: I think you'll get mouseDown events in the dead area, and if mouseEventToItemRenderer is null and event.target is ListBaseContentHolder, you know you're in the dead area.

Posted by: Cato | February 26, 2008 12:41 AM

Hi Alex, thanks for your help. I was able to solve two issues in my grid with your help. This time I have a question about itemEditors (I am not sure who else to ask). On my page, I have two grids, top and bottom. If the user clicks on any row in the top grid, I have to display the corresponding items in the bottom grid. One of the columns in the top grid is editable. Now the problem is, if the user clicks on this column, the item editor should open, but that doesn't happen, and the user has to click one more time on the same cell to open the item editor. I tried to debug the application and found the following issue: as soon as the user clicks on the editable cell, the item editor opens. But I rebuild the bottom grid as soon as the selection is changed in the top grid, so presumably the focus moves from the top grid to the bottom grid and the item editor instance is destroyed.
If the user clicks on the same cell again, the selection is not changed and the bottom grid is not re-created. So the focus doesn't move and we can see the item editor. The user wants to see the item editor in a single click, not two clicks. Is there a way to do this?

--------------
Alex responds: Unless you are calling setFocus on the bottom grid, focus should remain on the top grid that got clicked on. I would try to find out why focus is changing. Or is it because focus was already in the bottom grid and it tried to retain the focus? In theory, there should be enough FocusEvents being dispatched to determine that you've lost focus to the other grid, and a call to setFocus on the top grid, and maybe setting editedItemPosition, might fix it.

Posted by: Nikhil | February 26, 2008 1:18 AM

Hi Alex:

Thanks so much for posting this invaluable information. Like others, I am also battling performance problems with a large datagrid. My question is: how can I extend the MultiLineHTML example to insert an HTML link into the text cell (i.e.: adobe), then traverse to it when selected?

Thanks,
Tr

--------------------
Alex responds: In theory, you should be able to add that to the text cell and it should just work. If you have problems, try it with a default cell and a single-line link, then post to FlexCoders if you have problems.

Posted by: Tony | March 6, 2008 8:09 AM

Hi Alex,

Thanks for the article. We have been scratching our heads over why the application takes so much memory - sometimes the computer even crashes, or the system slows down a lot. Now I have figured out that the MXML-based itemRenderer we are using causes this crash.
The application we are developing has a custom TileList component which loads a heavy MXML item renderer used to display PowerPoint slide JPG images (ranging from 60 to 120 slides). There are rules to be applied, so a lot of drawing API code is used to draw lines, and we select the item renderers in different combinations: if you select one item and it belongs to a bundle of, say, 8, all 8 get selected, and for highlighting the selection we draw 8 yellow rectangles, plus two other types of complicated bundle rules. As per our requirements we increase the size of the TileList component so that it does not unload the items that are not visible; we manipulate the TileList's parent canvas to display a scrollbar for scrolling the TileList component itself. So if the TileList has 50 items, all 50 items are always in memory. When I used the profiler to see what's happening, each item renderer had more than 200 items in it. Initially the application was developed using Flex 2.0; then, after reading some blogs, we ported to Flex 3.0 to see if there was any improvement, but nothing much changed. I don't know why Adobe did not handle this kind of big problem: it's not removing the old items, and I don't know where they get created. At design time the item renderer has 8 Image components to display 7 thumbnail images according to the rules set for each renderer at runtime, plus 3 Canvas, 4 HBox, 1 VBox, and 3 Spacers for alignment. I have taken your advice to convert it into an ActionScript class. I would like your advice so that, whenever each item renderer is loaded into the TileList, the Windows page-file usage does not rise by 10 MB that is never released, on each drag and drop the page-file usage does not increase by 15 to 20 MB, and CPU usage does not hit 100% on every action. Please suggest how to go about creating a custom ActionScript-based item renderer that does not take so much memory.
Thanks, Thiru
---------------------
Alex responds: The Flex 3 Profiler should help you see where your memory is going. It sounds like you're loading a lot of bitmaps, and that will be expensive. Fetching bitmaps at a higher resolution than the screen is wasteful. If you can generate lower-resolution bitmaps on the server, that might be a good thing to do. Our friends at Scene7 have made a business out of it.

Posted by: thiru | March 11, 2008 9:11 AM

I was having trouble getting my custom itemRenderers to work on the children (I'm operating off the version from widget-labs which supports an arbitrary number of children). Basically, when it assigned the data to the existing child renderer, it was using the parent's data field instead of the child's. I changed this:

childRenderer.data = data[column.dataField];

to this:

childRenderer.data = data[childColumn.dataField];

and that seemed to fix it.

Posted by: Adric | March 20, 2008 12:41 PM

Hi, I have a DataGrid in which I have to display some text. The text will be in some cells, and the other cells will be blank. The text is long, like "planned milestone achieved". The requirement is to keep all cells at a fixed size, and the text should overlap the adjacent cells to the right. Let's say the cell at column X and row Y is represented as cell(X,Y). The requirement is that the text in cell(1,1) (1st column of the 1st row) can extend over cell(2,1) and cell(3,1)... cell(1,1) itself should not expand. Is this possible?
-----------------------
Alex responds: Possible, but tricky. You should see if the AdvancedDataGrid has the kind of column-span support you want. If it doesn't, the fact is that all cells are children of the same parent, and you can change their z-order and width after the DataGrid lays out the cells.
Posted by: Nikhil | March 24, 2008 6:08 AM

Hi Alex and everybody, thank you all for everything posted here. I've just finished my project 'Smart Stock Client' using Flex and Alex's item renderers. There is a link on my website; check it out and send feedback to me at tungh2@gmail.com. Thanks again.

Posted by: TungH2 | April 1, 2008 7:11 PM

Hey Alex, you responded to a post (Posted by: Nikhil | January 30, 2008 02:56 AM) asking how to target a function in the main application from the itemRenderer. I have a similar situation, but I need to target a function in the component itself (the component is a tree component) from the itemRenderer, which is an AS file. I can't seem to target functions from the component I'm in. Do I have to go all the way out to the main application and back in through a reference to the component, or is there a direct way I can see stuff in that component?
--------------------------
Alex responds: After commitProperties, the owner property should be referencing the tree.

Posted by: Brixel | April 10, 2008 10:13 AM

Alex, thanks for all the postings here. I have a question regarding scrolling. I used a custom list item renderer that has variable row heights. When I scroll up and down the list, the displayed data is messed up: I can see old data values still overlapping the new data. How can I solve this problem? Thanks for your help.
---------------------
Alex responds: That still sounds like you're having recycling issues. Make sure that the component fully validates in one pass. If you're loading external images, that will be a problem since the load won't finish within the validation pass. Some folks use SuperImage from quietlyscheming.com to get around that, but there are lots of other ways you can end up with a two-pass validation, so I'd check for that first. Adding traces to updateComplete will tell you how often they get called.
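Alex's recycling advice above generalizes beyond Flex: a recycled renderer must derive all of its visual state from the current data, and never retain leftovers from the item it previously displayed. A minimal JavaScript sketch of the idea (all names are hypothetical, not Flex API):

```javascript
// Hypothetical sketch of the "stateless renderer" idea behind renderer
// recycling: reset every visual property on each data assignment, so
// stale state from the previously rendered item can never leak through.
function Renderer() {
  this.label = '';
  this.highlighted = false;
}

Renderer.prototype.setData = function (data) {
  // Derive ALL display state from the current data only.
  this.label = data.label || '';
  this.highlighted = !!data.highlighted;
};
```

If setData only wrote properties that were present in the new data, the old item's highlight would survive recycling, which is exactly the "old data overlapping the new data" symptom described above.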
Posted by: greenstone | April 30, 2008 8:18 AM

Hey Alex, you won't believe how frustrating itemRenderers have been for me lately, but your post clarified all the questions and funny situations I have been through. I found your post a little late, but anyway it's still not that late. Your write-up is simply excellent and I will tune in to it more often now. The next thing I am going to do is find other Adobe Flex engineers' blogs. No one can beat those details! Thanks once again!

Posted by: Sunil | June 7, 2008 3:46 AM

Hi, I need to add a combo box to one of the columns in an AdvancedDataGrid and populate it with some data from a data source. When an item is selected from the combo box I need to send an HTTP request, using a value in another column of the same AdvancedDataGrid. I have used an itemRenderer and added an HBox and a combo box in it, but I am not able to read the value of another column. Can you please help me and let me know how I would read the value in another column? Thanks in advance.
---------------
Alex responds: Please ask on the FlexCoders Yahoo Group.

Posted by: RBKB | June 30, 2008 6:08 AM

Hey, how can I have two link buttons in a single column through an item renderer? Please help me.
---------------
Alex responds: You might be better off asking on FlexCoders.

Posted by: Umer | July 23, 2008 8:47 PM

Alex, RE: sub-object item renderers. Can you please let us know how to extend ComboBox/CheckBox editors if the objects are nested? Thank you, prahari
----------------------
Alex responds: Probably have to subclass and override the data setters and maybe commitProperties.

Posted by: prahari | July 25, 2008 4:27 PM

Hi Alex, thanks for the examples! The ComboBox header renderer works well, but there is something fishy happening behind the scenes. I have noticed that the ComboBoxHeaderRenderer class gets created twice at startup and then, as you use the combo, new instances keep being created...
In a little while I ended up with over 30 instances of ComboBoxHeaderRenderer in your example (running the Profiler in Flex Builder 3). Garbage collection doesn't get rid of them. Why is the DataGrid not reusing a single instance? I noticed this when I put a listener in the header... I can work around that, but it is a bit scary to have so many header instances flying around... Take care!
----------------------------
Alex responds: Header renderers get recycled just like other renderers, probably more often than necessary though. If they are not getting GC'd then there's a bug somewhere that needs fixing. The profiler will show who is holding onto those renderers.

Posted by: Stepan | July 29, 2008 5:47 PM

Hi Alex, thanks for the examples! :) One question: in my case I have a String like "123,3$ {^P}" as data in a data column, and the string must be rendered as text = 123,3$ plus an image (^P is an up green arrow). Is it possible? I was trying the following: public class LabelImageRenderer extends DataGridItemRenderer ... but DataGridItemRenderer has only text or htmlText, and I need to add an image. How can I extend DataGridItemRenderer to add an image? I tried creating a DataGridItemRenderer component and adding children, but the additions are not visible in the item renderer. Any idea? Do I have to change my approach?

Posted by: donkelito | August 19, 2008 5:14 PM

Alex... many thanks. I'm using your DataGrid technique to render a master/detail view by means of a list with a custom renderer, which is itself inline-rendered in a column, e.g.: the list contains dates, 1..n dates per data-table row. All goes well until it's time to set the depth of the visible list. The renderer, below, successfully receives the list depth from the custom tag and sets it on its parent; however, the visible result is that the first row's list depth is carried over to all other lists in all other rows.
I'm trying to stay stateless per your suggestion, but must be missing something. Can you offer any guidance, or alternative approaches? The relevant section of the list renderer is:

override public function validateProperties():void {
    if (!listData) {
        return super.validateProperties();
    }
    var ukDisplayDateFmt:DateFormatter = new DateFormatter();
    ukDisplayDateFmt.formatString = "DD/MM/YYYY";
    var listControl:Object = listData.owner as ListBase;
    var targetField:String = listControl.targetField;
    listData.label = ukDisplayDateFmt.format(data[targetField]);
    // Deal with row count in a stateless way
    var n:int = listControl.nOffers;
    if (n == 0) {
        listControl.rowCount = 1; // ensure space if somehow depth fails
    } else {
        listControl.rowCount = n;
    }
    return super.validateProperties();
}

-------------
Alex responds: Sorry, I'm not sure I understand what you mean by depth. Try posting on FlexCoders; you'll get more folks to help there.

Posted by: Robert Patt-Corner | August 27, 2008 7:08 AM

And the answer to the variable depth of the list problem is (duh) that the surrounding data table must be set with: variableRowHeight="true". 4 hours to find, 4 seconds to fix. Sigh.

Posted by: Robert Patt-Corner | August 27, 2008 7:52 AM

Hi, I am new to Flex and facing a problem. I have an AdvancedDataGrid that acts as an item renderer within another DataGrid, up to 3 levels: ADG1 -> ADG2 -> ADG3. But the data displayed is not easily readable. The problem is that whenever I expand the inner grid I need to change the size of the outer grid's row as well, so that a scrollbar doesn't appear in the inner grid. I am unable to figure out how to change the size of the outer grid whenever I expand or collapse the inner grid. Please help.
------------------
Alex responds: I'd re-think your UI.
Ask ADG questions on Sameer Bhatt's blog.

Posted by: Terry | September 4, 2008 4:15 AM

Hi Alex, I have a small problem: we are writing an application which has a few DataGrids (approx. 10+) scattered across the application, and I was wondering if there is a way to set up a generic renderer which can be used for all these DGs. For example, I want all the DataGrids to show negative numbers in RED, but I feel writing the same renderer over and over might not be worth it. Is it possible?
------------------------
Alex responds: Not sure I understand. Renderers can certainly be re-used in multiple DGs. Most of the examples in the blog implement IDropInListItemRenderer and can be reused.

Posted by: vivek | September 30, 2008 9:36 PM

Hi Alex, I am new to Flex. I have 3 columns, with a TextInput item renderer in 2 columns and a ComboBox in the 3rd column. In the itemEditEnd handler, I have called event.preventDefault() to prevent the update of the dataProvider. I want this dataProvider to be updated with the new values on a button click. I am unable to figure out how to achieve this. Please help.
----------------------
Alex responds: Try asking on FlexCoders. You'll get a better response that way.

Posted by: Tarun | October 2, 2008 7:00 AM

Alex, great blog! Thank you for the excellent examples and explanations. I used your textStyles example to apply a custom style function to a DataGrid (since I only have the standard version of Flex Builder, thus no AdvancedDataGrid). In my case, I wanted to set all of the text in one specific row of the grid to gold, but wanted all other rows to behave according to the CSS (i.e., white text that turns gray when moused over, orange when selected). I was able to adapt your computeStyles function to set the text gold or white according to the data value... but that was overriding the CSS specs for the mouseOver and selectedItem styles.
A slight change fixed the problem -- rather than having computeStyles always return a textFormat, it returns null for all rows except my special gold row; I also added a test in ComputedStylesRenderer.getTextStyles, to simply return super.getTextStyles() when styleFunction returns null. Now my datagrid behaves exactly as I wanted it to. Thanks again. Posted by: Jay Wood | October 9, 2008 3:45 PM
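Jay Wood's fix is a reusable pattern well beyond Flex: the style function returns an explicit style only for the rows it cares about, and null otherwise, and the caller falls back to the default styles on null. A small JavaScript sketch of the pattern (all names hypothetical):

```javascript
// Hypothetical sketch of the "return null to defer to defaults" pattern
// described above: the style function overrides only the special row.
function makeStyleFunction(isSpecialRow) {
  return function computeStyles(rowData) {
    if (isSpecialRow(rowData)) {
      return { color: 'gold' };  // explicit override for this row only
    }
    return null;                 // null => caller keeps default CSS styling
  };
}

// The caller (the renderer, in Jay Wood's case) interprets null as
// "use the inherited defaults, including hover/selection states".
function resolveStyle(styleFunction, rowData, defaults) {
  var custom = styleFunction(rowData);
  return custom !== null ? custom : defaults;
}
```

The key design point is that returning a style object for every row, as the original computeStyles did, silently overrides state-dependent defaults such as hover and selection colors.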
http://blogs.adobe.com/aharui/2007/03/thinking_about_item_renderers_1.html
hi, I'm doing image processing in Java. I want to do the glass effect. I got one calculation from the net. It's working fine with an image of size 128x128, but I want it to work for 120x160. I tried with that calculation, but it's pretty difficult for me to find exactly where the flow changes. Here is the code:

import java.awt.*;
import java.applet.*;
import java.awt.image.MemoryImageSource;
import java.awt.image.PixelGrabber;

public class glass extends Applet implements Runnable {
    // Class-wide arrays.
    static public int[] PCos;              // Pre-calculated degree cosine table.
    static public int[] PSin;              // Pre-calculated degree sine table.

    // Private members.
    private Thread lineThread;             // This thread.
    private Image img = null;              // Destination image.
    private Image Original;                // Original image.
    private MemoryImageSource source;      // Used to convert the raw buffer to img.
    private PixelGrabber pg;               // Pixel grabber to access pixels.
    private int width, height;             // Our dimensions.
    private int[] pixeldata;               // Work buffer.
    private int[] src_data;                // Original pixel buffer.
    private int[] glass_buffer;            // Buffer for glass distances.
    private int[] glass_shape = {0,0,0,1,1,1,2,2,2,3,3,4,4,5,5,6,7,8,9,10,12,14,16,18,20,22,24,26,28,30};
    private int xpos, ypos, dx, dy;        // Glass position and direction.

    public void init() {
        Original = getImage(getDocumentBase(), getParameter("image")); // Image named 'image' in the HTML source.
        width = getSize().width;            // Our display width.
        height = getSize().height;          // Our display height.
        pixeldata = new int[width * height];
        src_data = new int[width * height];
        PSin = new int[360];                // 360-entry tables for pre-calculated math.
        PCos = new int[360];
        glass_buffer = new int[3600];       // Glass buffer (60 x 60 lens area).
        setBackground(Color.black);
        source = new MemoryImageSource(width, height, pixeldata, 0, width); // Image source backed by the raw buffer.
        source.setAnimated(true);           // Tell it it's animated :-)
        pg = new PixelGrabber(Original, 0, 0, 128, 128, src_data, 0, width); // Grab the pixel buffer from the source image.
        try {
            pg.grabPixels();
        } catch (InterruptedException e) {
            System.err.println("interrupted waiting for pixels!");
            return;
        }
        img = createImage(source);          // Create the destination image from the source.
        for (int i = 0; i < 360; i++) {
            PCos[i] = (int)(256 * Math.cos(Math.toRadians(i))); // 8-bit fixed-point math!
            PSin[i] = (int)(256 * Math.sin(Math.toRadians(i)));
        }
        for (int i = 0; i < 3600; i++) {
            glass_buffer[i] = 255;          // Fill buffer with 0xff.
        }
        for (int i = 0; i < 30; i++) {
            for (int o = 0; o < 360; o++) {
                int x1 = 30 + ((i * PCos[o]) >> 8);
                int y1 = 30 + ((i * PSin[o]) >> 8);
                int x2 = 30 + ((glass_shape[i] * PCos[o]) >> 8);
                int y2 = 30 + ((glass_shape[i] * PSin[o]) >> 8);
                glass_buffer[(y1 * 60) + x1] = y2 * 128 + x2;
            }
        }
        xpos = 25;
        ypos = 0;
        dx = 1;
        dy = 2;
    }

    public void start() {
        if (lineThread == null) {                        // Does lineThread already exist?
            lineThread = new Thread(this);               // Create a new thread.
            lineThread.setPriority(Thread.MAX_PRIORITY); // Set highest priority.
            lineThread.start();
        }
    }

    public void stop() {
        if (lineThread != null) {
            lineThread = null;  // Dereference it (Thread.stop() is deprecated).
        }
    }

    public void paint(Graphics g) {
        source.newPixels(0, 0, width, height); // Our buffer has changed; tell the source :-)
        g.drawImage(img, 0, 0, this);          // Redraw the destination image.
    }

    public void update(Graphics g) {
        paint(g);                              // Call paint on update.
    }

    public void run() {                        // Thread main function.
        while (true) {                         // Infinite loop.
            ApplyGlass(xpos, ypos);
            xpos += dx;
            ypos += dy;
            if ((xpos > 127 - 60) || (xpos < 1)) dx *= -1; // Invert x direction.
            if ((ypos > 127 - 60) || (ypos < 1)) dy *= -1; // Invert y direction.
            repaint();                         // Calls paint.
            try {
                Thread.sleep(20);              // Free a few CPU cycles :-)
            } catch (InterruptedException e) {}
        }
    }

    private void ApplyGlass(int x, int y) {
        int deph;
        int col;
        int ptr = 0;
        int source = (y << 7) + x;             // y * 128 + x: assumes a 128-pixel-wide image.
        for (int i = 0; i < 128 * 128; i++) pixeldata[i] = src_data[i];
        for (int i = 0; i < 59; i++) {
            for (int o = 0; o < 60; o++) {
                deph = glass_buffer[ptr++];
                if (deph != 255) {
                    col = src_data[source + deph];
                    pixeldata[(x + o) + ((y + i) << 7)] = col;
                }
            }
        }
    }
}

Can anyone tell me how to change the calculation from 128x128 to 120x160? I've been struggling with this code for the last 2 days. Please help me. Thanks a lot.

This seems to be a question about an algorithm, not a Java programming problem. If you'll explain the algorithm that you want to implement in Java, we'll be glad to help.

hi friend, I'm doing image processing in J2ME. I want to do a magnifying lens effect for images. For that I got the above-mentioned calculation from the net. It's working fine with 128x128 images, but with 120x160 or some other sizes it's not working. I want to draw a circle first, and after selecting this lens effect from the menu I want the magnifying lens effect applied to the image for the size of the circle. The calculation above is somewhat difficult to understand, such as why a given value is used where it is. If you have any idea regarding this, let me know. Thanks a lot, and thanks for your reply.

Can you define the magnifying lens effect in English? I see a lot of math such as cos and sin. What is the use of the value 30? int x1 = 30 + ((i * PCos[o]) >> 8); Why PCos() >> 8? Why xpos > 127 - 60? A lot of undocumented code. Without the algorithm definition, no idea on how/why this code works or how to change it. Good luck.

hi friend, the magnifying lens effect means: imagine you're looking at your hands through a lens, and how your hands would look. I want to do that effect in Java. We have to select a circular portion and enlarge that portion to look like that. For that I got this code from the net; for your convenience I have attached here an image after this effect is applied. Thanks a lot.

An algorithm is a list of steps to solve a problem.
You're providing a high-level description of the problem, not a way to solve it. You need to apply some brain power and list the steps needed to do what you want. These steps will be low-level and will include definitions for the questions I asked before: What is the use of the value 30? int x1 = 30 + ((i * PCos[o]) >> 8); Why PCos() >> 8? Why xpos > 127 - 60? When you have the steps figured out, ask a specific question about how to do that step in Java. Good luck.

hello! I would like to ask about image processing techniques using Visual Basic as the language. I am having a hard time filtering my thresholded image. There are many images caught by my cam, and I want only the two circles which serve as the markers of our object to remain. What is the best way of solving this? I hope somebody can help me; our project evaluation is only a stone's throw away and I really need to finish this. Thank you.
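For the original 128x128-to-120x160 question in this thread: the applet assumes a 128-pixel-wide image wherever it shifts by 7 ((y << 7) + x is y*128 + x), stores y2*128 + x2 in glass_buffer, copies 128*128 pixels, and bounces at 127 - 60. Generalizing means replacing each hardcoded 128 or 127 with the real width or height. A sketch of the index math in JavaScript (variable names are hypothetical; LENS is the 60-pixel lens size):

```javascript
// Generalizing the hardcoded 128x128 constants from the applet above.
var LENS = 60; // lens diameter used by the applet's 60x60 glass_buffer

function pixelIndex(x, y, width) {
  // Replaces both (y << 7) + x and y2 * 128 + x2, which assumed width == 128.
  return y * width + x;
}

function bounceDelta(pos, delta, limit) {
  // Replaces checks like (xpos > 127 - 60) || (xpos < 1): reverse the
  // direction when the lens would move off the image edge. `limit` is the
  // image width (for dx) or height (for dy).
  if (pos > limit - 1 - LENS || pos < 1) return -delta;
  return delta;
}
```

With a 120x160 image, the x checks would use limit = 120 and the y checks limit = 160, and the pixel-copy loop would run width * height times instead of 128 * 128.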
http://forums.devx.com/showthread.php?146042-help-regarding-image-processing&p=434944
> On May 21, 2015, 12:29 p.m., Timothy Chen wrote:
> > src/slave/containerizer/provisioners/appc/bind_backend.hpp, line 60
> > <>
> >
> > I think we typically add namespaces in cpp files to avoid process::
> > and std:: everywhere, just a nit.

This is currently header-only. Will correct if moved to source.

- Ian

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
-----------------------------------------------------------

On May 19, 2015, 11:46 a.m., Ian Downes wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
>
> -----------------------------------------------------------
>
> (Updated May 19, 2015, 11:46 a.m.)
>
> Review request for mesos, Chi Zhang, Paul Brett, Timothy Chen, and Vinod Kone.
>
> -----
>
> src/Makefile.am 34755cf795391c9b8051a5e4acc6caf844984496
>
>
https://www.mail-archive.com/reviews@mesos.apache.org/msg03432.html
Image Sequencer 3.3.1

- Latest Stable Demo:
- Latest Beta Demo:
- Stable Branch:

Why

Image Sequencer is different from other image processing systems because:

- images are run through the same sequence of steps
- it works identically in the browser, on Node.js, and on the command line

The following diagrams attempt to explain how the application's various components interconnect.

It also prototypes other related ideas:

- filter-like image processing -- apply a transform to an image from a given source, like a proxy, i.e. every image tile of a satellite imagery web map
- test-based image processing -- the ability to create a sequence of steps that do the same task as other image processing tools, provable with example before/after images to compare with
- logging each step -- to produce an evidentiary record of modifications to an original image
- cascading changes -- change an earlier step's settings, and see those changes affect later steps
- "small modules"-based extensibility: see Contributing

Examples

A diagram of this running 5 steps on a single sample image may help explain how it works:

- Installation
- Quick Usage
- CLI Usage
- Classic Usage
- Method Chaining
- Multiple Images
- Creating a User Interface
- Contributing
- Submit a Module
- Get Demo Bookmarklet

Installation

This library conveniently works in the browser, in Node, and on the command line (CLI).

Unix-based platforms

You can set up a local environment to test the UI with sudo npm run setup followed by npm start.

Windows

Our npm scripts do not support Windows shells; please run the following snippet in PowerShell:

npm i ; npm i -g grunt grunt-cli ; grunt build ; grunt serve

In case of a port conflict, please run the following:

npm i -g http-server ; http-server -p 3000

Browser

Just include image-sequencer.min.js in the head section of your web page. See the demo here!
Node (via NPM)

(You must have NPM for this.) Add image-sequencer to your list of dependencies and run npm install.

CLI

Globally install Image Sequencer:

$ npm install image-sequencer -g

(You should have Node.js and NPM for this.)

To run the debug script:

$ npm run debug invert

Quick Usage

Initializing the Sequencer

The Image Sequencer library exports a function ImageSequencer which initializes a sequencer:

var sequencer = ImageSequencer();

Image Sequencer can be used to run modules on an HTML image element using the replaceImage method, which accepts two parameters, selector and steps. selector is a CSS selector; if it matches multiple images, all of them will be modified. steps may be the name of a module or an array of names of modules.

Note: browser CORS restrictions apply. Some browsers may not allow local images from other folders, and will throw a security error instead.

sequencer.replaceImage(selector, steps);

optional_options allows passing additional arguments to the module itself. For example:

sequencer.replaceImage(selector, steps);
sequencer.replaceImage(selector, steps, optional_options);

Data URL usage

Since Image Sequencer uses data URLs, you can initiate a new sequence by providing an image in the data URL format, which will import into the demo and run. Try this example link with a very small data URL. To produce a data URL from an HTML image, see this nice blog post with example code.

CLI Usage

Image Sequencer also provides a CLI for applying operations to local files. The CLI takes the following arguments:

-i | --image [PATH/URL] | Input image URL. (required)
-s | --step [step-name] | Name of the step to be added. (required)
-b | --basic | Basic mode only outputs the final image.
-o | --output [PATH] | Directory where output will be stored. (optional)
-c | --config {object} | Options for the step.
(optional)
--save-sequence [string] | Name, space-separated from the stringified sequence to save.
--install-module [string] | Module name, space-separated from the npm package name.

The basic format for using the CLI is as follows:

$ ./index.js -i [PATH] -s step-name

NOTE: On Windows you'll have to use node index.js instead of ./index.js.

The CLI can also take multiple steps at once, like so:

$ ./index.js -i [PATH] -s "step-name-1 step-name-2 ..."

For this, double quotes must wrap the space-separated steps. Options for the steps can be passed in one line as JSON via the config option, like:

$ ./index.js -i [PATH] -s "brightness" -c '{"brightness":50}'

Or the values can be given through the terminal prompt.

The save-sequence option can be used to save a sequence and its associated options for later use. Provide a string containing a name for the sequence, space-separated from the sequence of steps which constitute it:

sequencer --save-sequence "invert-colormap invert(),colormap()"

The install-module option can be used to install new modules from npm. You can register the module in your sequencer under a custom name, space-separated from the npm package name. Below is an example with the image-sequencer-invert module:

sequencer --install-module "invert image-sequencer-invert"

The CLI is also chainable with other commands using &&:

sequencer -i <Image Path> -s <steps> && mv <Output Image Path> <New path>

Classic Usage

Initializing the Sequencer

The Image Sequencer library exports a function ImageSequencer which initializes a sequencer:

var sequencer = ImageSequencer();

Loading an Image into the Sequencer

The loadImage method is used to load an image into the sequencer. It accepts an image src, either a URL or a data URL. The method also accepts an optional callback:

sequencer.loadImage(image_src, optional_callback);
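loadImage accepts several forms of image_src (a data URI, a local path, or a URL, as the surrounding text describes). A hypothetical helper, not part of the library, sketching how those forms can be told apart:

```javascript
// Hypothetical classifier for the image_src values loadImage accepts.
// Not part of Image Sequencer; shown only to make the accepted forms concrete.
function classifyImageSrc(src) {
  if (/^data:/.test(src)) return 'data-uri';       // e.g. data:image/png;base64,...
  if (/^https?:\/\//.test(src)) return 'url';      // remote image (CORS rules apply)
  return 'local-path';                             // e.g. ./images/photo.png
}
```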
On browsers, it may be a DatURI, a local image or a URL (Unless this violates CORS Restrictions). To sum up, these are accepted: - Images in the same domain (or directory - for a local implementation) - CORS-Proof images in another domain. - DataURLs return value: none (A callback should be used to ensure the image gets loaded) The callback is called within the scope of a sequencer. For example: (addSteps is defined later) sequencer; The this refers to all the images added in the parent loadImages function only. In this case, only 'SRC'. Adding steps to the imageAdding steps to the image The addSteps method is used to add steps to the image. One or more steps can be added at a time. Each step is called a module. sequencer; If only one module is to be added, modules is simply the name of the module. If multiple images are to be added, modules is an array, which holds the names of modules to be added, in that particular order. optional_otions is just an optional parameter, in object form, which you might want to provide to the modules. A variety of syntaxes are supported by Image Sequencer to add multiple steps and configurations quickly for module chaining. The project supports the string syntax, designed to be compact and URL friendly, and JSON, for handling more complex sequences. This can be achieved by passing strings to sequencer.addStep(): sequencer;sequencer; For passing default configurations ({} is optional): sequencer; For passing custom configurations: sequencer; For passing multiple custom configurations: sequencer For passing multiple custom configurable modules: sequencer return value: sequencer (To allow method chaining) Running the SequencerRunning the Sequencer Once all steps are added, This method is used to generate the output of all these modules. 
sequencer.run();

The sequencer can be run with a custom config object:

// The config object enables custom progress bars in a Node environment and
// the ability to run the sequencer from a particular index (of the steps array).
sequencer.run(config);

The config object can have the following keys:

config:
    progressObj: // a custom object to handle the progress bar
    index:       // index to run the sequencer from (defaults to 0)

Additionally, an optional callback function can be passed to this method:

sequencer.run(callback);
sequencer.run(config, callback);

Return value: sequencer (to allow method chaining).

Removing a step from the sequencer

The removeSteps method is used to remove unwanted steps from the sequencer. It accepts the index of the step as input, or an array of the unwanted indices if there is more than one. For example, if the modules ['ndvi-red','crop','invert'] were added in this order, and I wanted to remove 'crop' and 'invert', I can either do this:

sequencer.removeSteps(1);
sequencer.removeSteps(1); // after the first removal, 'invert' has shifted to index 1

or:

sequencer.removeSteps([1, 2]);

Return value: sequencer (to allow method chaining).

Inserting a step in between the sequencer

The insertSteps method can be used to insert one or more steps at a given index in the sequencer. It accepts the index where the module is to be inserted, the name of the module, and an optional options parameter:

sequencer.insertSteps(index, module_name, optional_options);

index is the index of the inserted step. Only one step can be inserted at a time. optional_options plays the same role it played in addSteps. Indexes can be negative; a negative sign on an index means that counting is done in reverse order. If the index is out of bounds, the counting wraps in the original direction of counting. So, an index of -1 means the module is inserted at the end.

Return value: sequencer (to allow method chaining).

Importing an independent module

The loadNewModule method can be used to import a new module into the sequencer.
Modules can be downloaded via npm, yarn, or a CDN and are imported under a custom name. If you wish to load a new module at runtime, it will need to avoid using require() -- unless it is compiled with a system like Browserify or webpack.

const mod = require('image-sequencer-invert');
sequencer.loadNewModule('invert', mod);

Method Chaining

Methods can be chained on the Image Sequencer:

- loadImage()/loadImages() can only terminate a chain.
- run() cannot be in the middle of a chain.
- If the chain starts with loadImage() or loadImages(), the following methods are applied only to the newly loaded images.

Valid chains:

sequencer.addSteps('invert').run();
sequencer.addSteps('invert').loadImage('SRC', callback);

et cetera. Invalid chains:

sequencer.run().addSteps('invert');

Fetching current steps

The getSteps method can be used to get the array of current steps in this instance of the sequencer. For example:

sequencer.getSteps();

returns an array of steps associated with the current sequencer.

Saving Sequences

Image Sequencer supports saving a sequence of modules and their associated settings in a simple string syntax. These sequences can be saved in the local storage of the browser, and inside a JSON file in Node.js. Sequences can be saved in a Node context using the CLI option --save-sequence "name stringified-sequence". In Node and the browser, a corresponding function on the sequencer can be used to save sequences.
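The stringified sequences saved above use the compact step syntax (e.g. channel{channel:green},invert{}). A hypothetical, simplified sketch of parsing that syntax into {name, options} pairs (it handles at most one key:value option per step; the real parser does more):

```javascript
// Hypothetical, simplified parser for the compact step-string syntax,
// e.g. "channel{channel:green},invert{}". Not the library's own parser.
function parseSequenceString(str) {
  return str.split(',').map(function (stepStr) {
    // name, optionally followed by {key:value} or {}
    var match = /^([^{]+)(?:\{([^}]*)\})?$/.exec(stepStr.trim());
    var options = {};
    if (match[2]) {                    // non-empty braces: a key:value pair
      var kv = match[2].split(':');
      options[kv[0]] = kv[1];
    }
    return { name: match[1], options: options };
  });
}
```

Because each step carries its own options in braces, the whole sequence stays URL-friendly, which is why the demo can bootstrap a sequence from a shared link.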
```js
sequencer.toString();        // returns the stringified sequence of current steps
sequencer.toJSON();          // returns the JSON for the current sequence
sequencer.stringToJSON(str); // returns the JSON for a given stringified sequence
sequencer.importString(str); // imports the stringified sequence of steps into the sequencer
sequencer.importJSON(obj);   // imports the given sequence of JSON steps into the sequencer
```

Image Sequencer can also generate a string for usage in the CLI for the current sequence of steps:

```js
sequencer.toCliString();
```

### Importing steps using JSON array

Image Sequencer provides the following core API function to import a given sequence of JSON steps into the sequencer:

```js
sequencer.importJSON(obj);
```

It can be implemented the following way, for example:

```js
sequencer.importJSON([
  { name: 'channel', options: { channel: 'green' } },
  { name: 'invert', options: {} }
]);
```

where name is the name of the step to be added, and the options object can be used to provide various params to the sequencer, customising the default ones. To see this in action, please refer to line 51 of the following: test/core/modules/import-export.js

### Creating a User Interface

Image Sequencer provides the following events which can be used to generate a UI:

- onSetup: this event is triggered when a new module is set up. This can be used, for instance, to generate a DIV element to store the generated image for that step.
- onDraw: this event is triggered when Image Sequencer starts drawing the output for a module. This can be used, for instance, to overlay a loading GIF over the DIV generated above.
- onComplete: this event is triggered when Image Sequencer has drawn the output for a module. This can be used, for instance, to update the DIV with the new image and remove the loading GIF generated above.
- onRemove: this event is triggered when a module is removed. This can be used, for instance, to remove the DIV generated above.
- notify: this event is triggered whenever we need to shoot a notification to the user interface. For example, when a step is not available, we can shoot a notification by sending an appropriate message. For the HTML UI it adds a DOM node to the browser; for CLI and Node, it logs the notification output to the respective console.

How to define these functions:

```js
sequencer.setUI({
  onSetup: function (step) { /* ... */ },
  onDraw: function (step) { /* ... */ },
  onComplete: function (step) { /* ... */ },
  onRemove: function (step) { /* ... */ },
  notify: function (msg, id) { /* ... */ }
});
```

These methods can be defined and re-defined at any time, but it is advisable to set them before any module is added and not change them thereafter. This is because the setUI method will only affect the modules added after setUI is called. The onComplete event is passed the output of the module.

Image Sequencer provides a namespace step for the purpose of UI creation in the scope of these definable functions. This namespace has the following predefined properties:

- step.name: (String) name of the step
- step.ID: (Number) an ID given to every step of the sequencer, unique throughout
- step.imageName: (String) name of the image the step is applied to
- step.output: (DataURL String) output of the step
- step.inBrowser: (Boolean) whether the client is a browser or not

In addition to these, one might define their own properties, which shall be accessible across all the event scopes of that step. For example:

```js
sequencer.setUI({
  onSetup: function (step) { step.startedAt = Date.now(); },
  onComplete: function (step) { console.log(step.name, Date.now() - step.startedAt); }
});
```

### Using multiple images on the same sequencer

An Image Sequencer object supports one image URL at a time. Adding a second image to the same sequencer will apply the same set of steps to the new image, flushing out the previous one.

```js
s1 = ...;
s1.loadImage(image1);
s1.addSteps('invert');
s1.run();
s1.loadImage(image2);
s1.run();
```

However, if we want to use more than one image, we can initialize a sequencer for each image like:

```js
sequencer1 = ...;
sequencer1.loadImage(image1);
sequencer1.addSteps('invert');
sequencer1.run();

sequencer2 = ...;
sequencer2.loadImage(image2);
sequencer2.addSteps('ndvi-red');
sequencer2.run();
```

Note: details of all modules can be sought using sequencer.modulesInfo(). This method returns an object which defines the name and inputs of the modules.
If a module name (hyphenated) is passed in the method, then only the details of that module are returned.

The notify function takes two parameters, msg and id, the former being the message to be displayed on the console (in the case of CLI and Node) or in an HTML component (in the browser). The id is optional and is useful for the HTML interface to give appropriate IDs.

Install: npm i image-sequencer
Version: 3.3.1
License: GPL-3.0
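As a self-contained illustration of the string syntax described above, here is a minimal parser for sequence strings such as `channel{channel:green},invert{}`. This is a hypothetical helper, not part of the image-sequencer API, and it assumes `|` separates multiple options within one step:

```javascript
// Parse a sequence string like "channel{channel:green},invert{}"
// into an array of { name, options } step objects.
// Hypothetical sketch, not part of the image-sequencer package.
function parseSequenceString(str) {
  return str.split(',').map(function (part) {
    // Each step looks like name{key:value} or name{}
    const match = part.match(/^([^{]+)\{([^}]*)\}$/);
    if (!match) throw new Error('Malformed step: ' + part);
    const options = {};
    if (match[2].length > 0) {
      // Assumption: '|' separates multiple key:value pairs
      match[2].split('|').forEach(function (pair) {
        const kv = pair.split(':');
        options[kv[0]] = kv[1];
      });
    }
    return { name: match[1], options: options };
  });
}
```

This mirrors the `{name, options}` shape accepted by importJSON above.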
https://www.npmjs.com/package/image-sequencer
Standard C Library (libc, -lc)

#include <aio.h>

int aio_suspend(const struct aiocb *const iocbs[], int niocb, const struct timespec *timeout);

If one or more of the specified asynchronous I/O requests have completed, aio_suspend() returns 0. Otherwise it returns -1 and sets errno to indicate the error, as enumerated below.

aio_cancel(2), aio_error(2), aio_return(2), aio_waitcomplete(2), aio_write(2), aio(4)

The aio_suspend() system call is expected to conform to the IEEE Std 1003.1 ("POSIX.1") standard. The aio_suspend() system call first appeared in FreeBSD 3.0. This manual page was written by Wes Peters <wes@softweyr.com>.
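As a sketch of how aio_suspend() is typically used (a hypothetical helper, assuming a POSIX AIO environment; on older glibc this needs linking with -lrt):

```c
#include <aio.h>
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Start an asynchronous read of n bytes from offset 0 of fd, then
   block in aio_suspend() until it completes. A NULL timeout means
   wait indefinitely. Returns bytes read, or -1 on error. */
ssize_t read_waiting(int fd, char *buf, size_t n) {
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = n;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0)
        return -1;

    /* aio_suspend() takes an array of aiocb pointers and its length. */
    const struct aiocb *list[1] = { &cb };
    if (aio_suspend(list, 1, NULL) != 0)
        return -1;

    return aio_return(&cb);
}
```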
http://www.linuxguruz.com/man-pages/aio_suspend/
From: William E. Kempf (wekempf_at_[hidden])
Date: 2003-02-05 16:05:20

> On Wednesday, February 05, 2003 3:04 PM [GMT+1=CET],
> William E. Kempf <wekempf_at_[hidden]> wrote:
>
>>> What I would like to see is a new boost::thread implementation
>>> which meets the following requirements.
>>>
>>> a. There shall be two interfaces to a thread. One for creation of a
>>> thread, from here on called boost::thread. And, one for the created
>>> thread, from here on called boost::thread::self.
>>
>> Self? Why? If it's an operation that can *only* be made on the
>> current thread, then a static method is a better approach. Otherwise,
>> I could make a "self" instance and pass it to another thread, which
>> could then attempt an operation that's not valid for calling on
>> another thread.
>
> It would seem to me that, given the availability of p->yield() as a
> syntax for invoking a static function, it'd be better to use a
> namespace-scope function to avoid errors and for clarity.

OK, I can buy that over a separate self class. This was discussed at one point, but the particular issue with p->yield() was never brought up. I'm not sure I find it compelling, because which thread yields should be evident from the documentation, and I don't see anyone ever using this syntax. But compelling or not, I'm not opposed to making this a free function if others think it's clearer in this regard.

--
William E. Kempf
wekempf_at_[hidden]

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
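The p->yield() concern is easy to demonstrate: in C++, a static member function can be invoked through an instance, so nothing at the call site tells the reader that the operation ignores the object entirely. A sketch (the class here is hypothetical, not the actual Boost.Thread interface):

```cpp
#include <cassert>

// Hypothetical stand-in for a thread class with a static yield().
struct Thread {
    static int yield_calls;                 // counts calls, for demonstration only
    static void yield() { ++yield_calls; }  // always acts on the *calling* thread
};
int Thread::yield_calls = 0;

int demo() {
    Thread a, b;
    a.yield();  // compiles, but 'a' is irrelevant: the static member runs
    b.yield();  // same operation again; which thread yields is not evident
    return Thread::yield_calls;
}
```

A namespace-scope yield() would make it syntactically impossible to suggest the call applies to a particular thread object.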
https://lists.boost.org/Archives/boost/2003/02/43882.php
15.4. Practice Exam 4 for the AP CS A Exam

The following 20 questions are similar to what you might see on the AP CS A exam. Please answer each to the best of your ability. Click the button when you are ready to begin the exam, but only then as you can only take the exam once. Click on the button to go to the next question. Click on the button to go to the previous question. Use the number buttons to jump to a particular question. Click the button to pause the exam (you will not be able to see the questions when the exam is paused). Click on the button after you have answered all the questions. The number correct, number wrong, and number skipped will be displayed.

- All three are valid - If there is not a call to super as the first line in a child class constructor then super() is automatically added. However, this will cause a problem if the parent class does not have a no argument constructor.
- II only - While II is valid so is another choice.
- III only - While III is valid so is another choice.
- II and III - Since C1 has constructors that take just an int and just a String both of these are valid.
- None are valid - C2 constructors can call C1 constructors using the super keyword. In fact, this call is automatically added as the first line of any C2 constructor if it isn't there.
- x != y - If we assume that x is not equal to y then the expression is (false && true) || (true && false) which is false.
- x == y - If the expression were equivalent to x == y, then it should return true when x equals y. But, if x is equal to y you would get (true && false) || (false && true) which is false.
- true - How can this be true? Remember that && requires both expressions to be true in order to return true. You can think of (x==y && !(x==y)) as A && !A which is always false. You can think of (x!=y && !(x!=y)) as B && !B which is always false.
- false - This can be simplified to (A && !A) || (B && !B) which is (false || false) which is false.
You can think of (x==y && !(x==y)) as A && !A which is always false. You can think of (x!=y && !(x!=y)) as B && !B which is always false.

- x < y - Since this expression is only about equality how could this be true?
- if (a[savedIndex] > a[j]) { j = savedIndex; } - Should j be set to the savedIndex?
- if (a[j] > a[savedIndex]) { savedIndex = j; } - This is a selection sort that starts at the end of the array, finds the largest value in the rest of the array, and swaps it with the current index.
- if (a[j] < a[savedIndex]) { savedIndex = j; } - This would be correct if this was starting at index 0 and finding the smallest item in the rest of the array, but this starts at the end of the array instead and finds the largest value in the rest of the array.
- if (a[j] > a[savedIndex]) { j = savedIndex; } - Should j be set to the savedIndex?
- if (a[j] == a[savedIndex]) { savedIndex = j; } - Why would you want to change the savedIndex if the values are the same?
- II only - Methods in an interface are abstract, but more of these choices are correct.
- III only - Methods in an interface are public, but more of these choices are correct.
- I and II only - Can you declare private methods in an interface?
- I, II, and III - One interface can inherit from another and the methods in an interface are public and abstract.
- I only - One interface can inherit from another, but more of these choices are correct.
- { {4, -5, 6},{-1, -2, 3} } - How did the values in row1 change to those in row2 and vice versa? Why didn't any value change to the absolute value?
- { {4, 5, 6},{1, 2, 3} } - How did the values in row1 change to those in row2 and vice versa?
- { {1, 2, 3},{4, 5, 6} } - This would be true if all the matrix values were changed to their absolute value. But, this only happens when the row and column index are the same.
- { {-1, -2, 3},{4, -5, 6} } - This would be true if none of the values in the matrix were changed.
But, this will change the value to the absolute value when the row and column index are the same.

- { {1, -2, 3},{4, 5, 6} } - This only changes the value in the matrix if the row and column index are the same. So this changes the values at (0,0) and (1,1).
- a = 4 and b = 3 - This would be true if the for loop stopped when i was equal to 4.
- a = 7 and b = 0 - Here are the values of a and b at the end of each loop: i=1, a=3, b=4; i=2, a=6, b=1; i=3, a=4, b=3; i=4, a=7, b=0.
- a = 2 and b = -2 - Go back and check your values each time through the loop.
- a = 5 and b = 2 - This would be true if the loop stopped when i was equal to 6, but it stops when i is equal to 5.
- a = 9 and b = 2 - Keep a table of the variables and their values each time through the loop.
- 243 - This would be true if it was mystery(5).
- 0 - How can this be? The value 0 is never returned.
- 3 - Did you notice the recursive call?
- 81 - This is the same as 3 to the 4th power (3 * 3 * 3 * 3 = 81).
- 27 - This would be true if it was mystery(3).
- {3,6,8,5,1}, {3,5,6,8,1}, {1,3,5,6,8} - This is almost right, but there should be 4 of these steps.
- {1,3,8,5,6}, {1,3,8,5,6}, {1,3,5,8,6}, {1,3,5,6,8} - This is selection sort, not insertion. Selection will find the smallest and swap it with the first element in the array.
- {3,6,8,5,1}, {3,6,8,5,1}, {3,5,6,8,1}, {1,3,5,6,8} - An insertion sort will skip the first position and then loop inserting the next item into the correct place in the sorted elements to the left of the current item.
- {1,3,8,5,6}, {1,3,5,8,6}, {1,3,5,6,8} - This is selection sort, not insertion, and it is also an incorrect selection sort since it skips one step.
- {1,6,3,8,5}, {1,3,6,8,5}, {1,3,5,6,8} - This doesn't match selection, insertion, or merge sort.
- 21 - The general formula for the number of times a loop executes is the last value - the first value + 1. The outer loop will execute 3 times (2-0+1) and the inner loop will execute 7 times (7-1+1) so the total is 3 * 7 = 21.
- 18 - This would be true if the inner loop stopped when j equals 7. - 32 - This would be true if the outer loop executed 4 times and the inner loop 8, but is that right? - 28 - This would be true if the outer loop executed 4 times, but is that right? - 10 - This would be true if you added the number of times the outer loop executes and the number of times the inner loop executes, but you multiply them. - A - This will only print if both num1 and num2 are greater than 0 and num1 is greater than num2. - B - This will only print if both num1 and num2 are greater than 0 and num1 is equal to or less than num2. - C - This will only print if both num1 and num2 are less than 0. - D - This will only print if num2 is less than 0 and num1 is greater than or equal to 0. - E - The first test will fail since num1 is less than 0, the second test will fail since num2 is greater than 0, the third test will also fail since num2 is greater than 0, which leads to the else being executed. - hi there - This would be true if we asked what the value of s3 was. - HI THERE - This would be true if we asked what the value of s2 was. - Hi There - Strings are immutable in Java which means they never change. Any method that looks like it changes a string returns a new string object. Since s1 was never changed to refer to a different string it stays the same. - null - This would be true if we asked what the value of s4 was. - hI tHERE - How could this have happened? - mp - A substring of (0,3) will have 3 characters in it (index 0, index 1, and index 2). - mpu - Remember that substring with two numbers starts at the first index and ends before the second. So s1 = Computer, s2 = mputer, s3 = mpu - mpur - A substring of (0,3) will have 3 characters in it (index 0, index 1, and index 2). - omp - Remember that the first character in a string object is at index 0. - om - A substring of (0,3) will have 3 characters in it (index 0, index 1, and index 2). 
- Book b = new Book(); - A object can always be declared to be of the type of the class that creates it. - Dictionary d = new Book(); - The declared type must the the type of the class that creates the object or the type of any parent class. Dictionary is not a parent of the Book class. - Comparable c = new Book(); - An object can be declared to be of an interface type if the interface type is one of the parent classes of the actual type. - Book b = new Dictionary (); - The declared type can be the actual type (the class that creates the object) or any parent of the actual type. - Comparable c = new Dictionary(); - Since Dictionary inherits from Book and Book implements the Comparable interface, this is allowed. - 2 - This would be true if the recursion stopped when you first the first non "x", but is that what happens? - 5 - This returns the number of "x"'s it finds in the str. - 1 - Did you notice the recursive calls? - 4 - How does it miss one "x"? - 0 - Since the first character is "x" how can this be true? - The value is the first one in the array - This could take a long time, but there is an answer that takes longer. - The value is in the middle of the array - This would be true if we were looking for the shortest execution of a binary search - The value is at index 1 in the array - This would be the second value checked if the value at the middle is greater than the desired value. - The value isn’t in the array - This will always take the longest when you are doing binary search. - The value is at index 6 in the array - This would be the second value checked if the value at the middle is less than the desired value. - Awk Awk Awk Awk Awk - This would be true if none of the children classes overrode the speak method, but many do. - This won’t compile - It is always okay to substitute a child object for a parent object. 
- Meow Moo Woof Oink Tweet - This would be true if Pig had a speak method that returned "Oink" and Bird had a speak method that returned "Tweet", but they do not. The inherited speak method will be called in Animal. - Meow Moo Woof Oink Awk - This would be true if Pig had a speak method that returned "Oink", but it does not. - Meow Moo Woof Awk Awk - Both Pig and Bird do not have a speak method so the one in Animal will be used. - 4 in base 8 - You can't just subtract the two numbers since they are in different bases. Convert both to decimal first. - 4 in base 16 - You can't just subtract the two numbers since they are in different bases. Convert both to decimal first. - 00001100 in base 2 - 17 in base 16 is 23 in base 10. 13 in base 8 is 11 in base 10. The answer is 12 in base 10 which is 00001100 in base 2. - 00000010 in base 2 - This is 2 in base 10. Convert both numbers to decimal and then convert the answer to binary. - 4 in base 10 - You can't just subtract the two numbers since they are in different bases. Convert both to decimal first. - s={3, 8}; b=4; - The value of a[1] will be doubled since passing a copy of the value of s is a copy of the reference to the array. The value in b won't change since y will be set to a copy of b's value which is just a number. - s={3, 4}; b=4; - What about a[1] = a[1] * 2? - s={6, 4}; b=4; - Remember that the first index in an array is index 0. This code will double the second value in the array (the one at index 1). - s={3, 8}; b=8; - Java passes arguments by creating a copy of the current value so the value of b won't be affected by changes to y. - s={6, 8}; b=8; - Java passes arguments by creating a copy of the current value so the value of b won't be affected by changes to y. - I only - This is true, but at least one other thing is true as well. - II only - This is true, but at least one other thing is true as well. - III only - Selection sort always takes the same amount of time to execute. 
- I and II only - Mergesort does use recursion (has a method that calls itself). Insertion sort does take longer to execute when the items to be sorted are in ascending order and you want them in descending order. - I, II, and III - Selection sort always takes the same amount of time to execute. - The method is recursive and the first call it will compare 3 to 5 and then do mystery(3,4,5). - 1 - There are two calls: mystery(0, 4, 5) and mystery(3, 4, 5). - 2 - This would be true if it was mystery(0, 4, 7); - 3 - This would be true if we were looking for a number that isn't in the array. - 4 - At most this will take log base 2 of the size of the array plus one to determine that the desired value isn't in the array. - 5 15-4-1: Consider the following partial class definitions. Which of the constructors shown below (I, II, and III) are valid for C2? public class C1 { private int num; private String name; public C1(int theNum) { num = theNum; } public C1(String theName) { name = theName; } // other methods not shown } public class C2 extends C1 { // methods not shown } Possible constructors I. public C2 () { } II. public C2 (int quan) {super (quan); } III. public C2 (String label) { super(label); } 15-4-2: The Boolean expression (x==y && !(x==y)) || ( x!=y && !(x!=y)) can be simplified to which of the following? 15-4-3: Which of the following could be used to replace the missing code so that the method sort will sort the array a in ascending order? public static void sort(int[] a) { int maxCompare = a.length - 1; int savedIndex = 0; int numSteps = 0; int temp = 0; for (int i = maxCompare; i > 0; i--) { savedIndex = i; for (int j = i - 1; j >= 0; j--) { /* missing code */ } temp = a[i]; a[i] = a[savedIndex]; a[savedIndex] = temp; } } 15-4-4: Which of the following statements about interfaces is (are) true? I. One interface can inherit from another II. All methods declared in an interface are abstract methods (can’t have a method body). III. 
All methods declared in an interface are public methods. 15-4-5: Consider the following declarations. If matrix is initialized to be: { {-1, -2, 3},{4, -5, 6} }. What will the values in matrix be after changeMatrix(matrix) is called? int[][] matrix = new int[2][3]; public static void changeMatrix(int[][] matrix ) { for (int row = 0; row < matrix.length; row++) for(int col = 0; col < matrix[row].length; col++) if(row==col) matrix[row][col] = Math.abs(matrix[row][col]); } 15-4-6: What are the values of a and b after the for loop finishes? int a = 5, b = 2, temp; for (int i=1; i<=4; i++) { temp = a; a = i + b; b = temp – i; } 15-4-7: Condsider the following method. What value is returned from a call of mystery(4)? public static int mystery(int n) { if (n == 0) return 1; else return 3 * mystery (n - 1); } 15-4-8: Which of the following correctly shows the iterations of an ascending (from left to right) insertion sort on an array with the following elements: {6,3,8,5,1}? 15-4-9: Consider the following code segment. How many times will a * be printed? for(int i = 0; i < 3; i++) { for(int j = 1; j <= 7; j++) System.out.println("*"); } 15-4-10: Consider the following method. What is the output from conditionTest(-3,2)? public static void conditionTest(int num1, int num2) { if ((num1 > 0) && (num2 > 0)) { if (num1 > num2) System.out.println("A"); else System.out.println("B"); } else if ((num2 < 0) && (num1 < 0)) { System.out.println("C"); } else if (num2 < 0) { System.out.println("D"); } else { System.out.println("E"); } } 15-4-11: What is value of s1 after the code below executes? String s1 = "Hi There"; String s2 = s1; String s3 = s2; String s4 = s1; s2 = s2.toUpperCase(); s3 = s3.toLowerCase(); s4 = null; 15-4-12: What is the output from the following code? 
String s = "Computer Science is fun!"; String s1 = s.substring(0,8); String s2 = s1.substring(2); String s3 = s2.substring(0,3); System.out.println(s3); 15-4-13: Given the following class declarations, which declaration below will result in a compiler error? public class Book implements Comparable { // code for class } public class Dictionary extends Book { // code for class } 15-4-14: What will the method below return when called with mystery(“xxzxyxx”)? public static int mystery(String str) { if (str.length() == 0) return 0; else { if (str.substring(0,1).equals("x")) return 1 + mystery(str.substring(1)); else return mystery(str.substring(1)); } } 15-4-15: Which will cause the longest execution of a binary search looking for a value in an array of 9 integers? 15-4-16: Given the following array declaration and the fact that Animal is the parent class for Bird, Dog, Pig, Cat, and Cow, what is output from looping through this array of animals and asking each object to speak()? Animal[] a = { new Cat(), new Cow(), new Dog(), new Pig(), new Bird() } Animal that has a method speak() which returns "Awk". Bird doesn’t have a speak method Dog has a speak method that returns “Woof” Pig doesn’t have a speak method Cow has a speak method that returns “Moo” Cat has a speak method that returns "Meow" 15-4-17: What is the result of 17 (in base 16) - 13 (in base 8)? 15-4-18: Consider the following method and code. What are the values of s and b after the following has executed? public static void test(int[] a, int y) { if (a.length > 1) a[1] = a[1] * 2; y = y * 2; } int[] s = {3,4}; int b = 4; test(s,b); 15-4-19: Which of the following is (are) true? I. Insertion sort takes longer when the array is sorted in ascending order and you want it sorted in descending order. II. Mergesort uses recursion. III. Selection sort takes less time to execute if the array is already sorted in the correct order. 
15-4-20: Given the following code, how many calls to mystery are made (including the first call) when mystery(0, 4, 5) is executed when arr = {1, 2, 3, 5, 7}? private int[] arr; public int mystery(int low, int high, int num) { int mid = (low+high) / 2; if (low > high) { return -1; } else if (arr[mid] < num) { return mystery(mid +1, high, num); } else if (arr[mid] > num) { return mystery(low, mid - 1, num); } else return mid; }
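Several of the trace questions above can be checked by simply running the code. For example, question 15-4-6's loop can be verified with this sketch (the class name is arbitrary; the loop body is copied from the question):

```java
public class LoopTrace {
    // Runs the loop from question 15-4-6 and returns {a, b}.
    public static int[] trace() {
        int a = 5, b = 2, temp;
        for (int i = 1; i <= 4; i++) {
            temp = a;
            a = i + b;    // i=1 -> a=3, i=2 -> a=6, i=3 -> a=4, i=4 -> a=7
            b = temp - i; // i=1 -> b=4, i=2 -> b=1, i=3 -> b=3, i=4 -> b=0
        }
        return new int[]{a, b};
    }
}
```

Running it confirms the answer a = 7 and b = 0.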
https://runestone.academy/runestone/books/published/apcsareview/TimedTests/test4.html
Converting Neural Network To TensorRT. Part 1: Using Existing Plugins.

What is TensorRT? NVIDIA has really good tutorials there! The example described in this post has the following inference times:

- 30 milliseconds TensorFlow
- 18 milliseconds TensorRT with FP32 (floating point 32-bit)
- 6 milliseconds TensorRT with FP16 (floating point 16-bit)

As you can see, the final result is 5x!!! faster than pure TensorFlow. The blog post is divided into two parts:

Part 1 (this): describes the overall workflow and shows how to use existing TensorRT plugins.
Part 2 (link): shows how to create a custom TensorRT layer/plugin.

Selecting a neural network

As an example I'll take a TensorFlow/Keras based network for 3D bounding box estimation (code, paper). Why this network?

- The l2_normalize operation, which is not supported by TensorRT (we will build a custom plugin for it)
- The LeakyRelu operation, which can be replaced with the official LRelu_TRT plugin, but is unsupported by default. More about official plugins below.
- The Flatten layer, which is able to silently confuse TensorRT and completely ruin network quality. This layer will be replaced with Reshape.

General workflow

The first thing to do is to freeze and optimize your graph. The resulting protobuf (.pb) file will be used by the next steps. This script can be used to freeze the graph. After the graph has been frozen:

- Convert the TensorFlow protobuf to UFF format (it seems UFF is used only in TensorRT)
- Convert the UFF to a TensorRT plan (.engine)

INFO: A TensorRT plan is serialized binary data compiled exclusively for a specific hardware type (i.e. a plan for Jetson TX2 only works on Jetson TX2).

For PyTorch, Caffe or other frameworks the workflow is a bit different and not covered here. In general, both steps can be done with one Python script. But because some TensorRT API functions are not available via the Python API (e.g. Deep Learning Accelerator related), most of the NVIDIA official scripts use C++ for the second step.
For my plugin (Part 2) I will also use a C++ script (feel free to make a PR with pybind to make it work with Python). As a baseline example (save it for later ;)), take a look at the following Python script that converts our network (VGG16 backbone) to a TensorRT plan. To avoid custom layers, l2_normalize operations are removed with "dynamic_graph.remove(node)" (covered in Part 2). But before this script can actually work, a few modifications to the original network are required.

Replacing ambiguous operations

Even if a TensorFlow/Keras operation is supported by TensorRT, it can sometimes work in an unexpected manner. The Flatten operation does not specify exact parameters (?, ?) (i.e. they depend on the input), and for some reason this confuses TensorRT. By default this messes up the NN output exactly after this operation. This can be related to the fact that TensorFlow uses NHWC and TensorRT uses NCHW dimension order (N = number of batches, H = height, W = width, C = channels). So we have to specify exactly what we want and not rely on TensorRT's assumptions about flatten behavior.

- x = Flatten()(vgg16_model.output)
+ x = Reshape((25088,))(vgg16_model.output)

Reshape with "-1". TensorRT complains if "-1" is used in a reshape operation. In TensorFlow "-1" is a special case, where the exact value is computed based on the input shape.

UFFParser: Parser error: reshape_2/Reshape: Reshape: -1 dimension specified more than 1 time

This can be solved by using exact shapes:

- orientation = Reshape((bin_num, -1))(orientation)
+ orientation = Reshape((bin_num, 2))(orientation)

NOTE: After the above changes we don't need to retrain the network, as the operations are equal.

Replacing LeakyRelu with a TensorRT plugin

Here is the tricky part. Despite the fact that there is an existing plugin for LeakyRelu, it's not obvious (from the docs) how to use it. We need a surgeon. It turns out this is a pretty simple operation called "collapse_namespaces" in graph surgeon.
Graph surgeon, as the name states, is a special utility intended to cure graphs via surgery. With graph surgeon you can remove/append/replace nodes. "collapse_namespaces" is intended to replace nodes; that's it: we specify the original node name, prepare the new replacement node, and collapse_namespaces will do the surgery. "negSlope" is the alpha coefficient in LeakyRelu. Different plugins can have different attributes.

Under the hood, if we look at the original box3d.pbtxt and box3d-uff.pbtxt (not generated here), the original LeakyRelu node (.pbtxt) is replaced with the new plugin node.

Original TensorFlow node.

node {
  name: "leaky_re_lu_2/LeakyRelu"
  op: "LeakyRelu"
  input: "dense_3/BiasAdd"
  attr {
    key: "T"
    value { type: DT_FLOAT }
  }
  attr {
    key: "alpha"
    value { f: 0.10000000149011612 }
  }
}

New TRT plugin node.

nodes {
  id: "leaky_re_lu_2/LeakyRelu"
  inputs: "dense_3/BiasAdd"
  operation: "_LReLU_TRT"
  fields {
    key: "negSlope_u_float"
    value { d: 0.1 }
  }
}

IMPORTANT: Do not forget to initialize the TRT plugins, otherwise TRT will not be able to recognize _LReLU_TRT.

Python: trt.init_libnvinfer_plugins(G_LOGGER, "")
C++: nvinfer1::initLibNvInferPlugins(&gLogger, "")

All other TRT plugins are listed here. All supported operations/layers are here. In Part 2, I will describe how to create a custom plugin.

Environment

- Nvidia Jetson AGX Xavier
- Jetpack 4.2 (CUDA 10.0.166, TensorRT 5.0.6)
- TensorFlow 1.13.1
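For reference, the activation that the _LReLU_TRT plugin computes is simple to state. The sketch below is plain Python, not TensorRT code; negSlope plays the role of TensorFlow's alpha (0.1 in the pbtxt above):

```python
def leaky_relu(x, neg_slope=0.1):
    """LeakyRelu: identity for x >= 0, neg_slope * x otherwise.

    neg_slope corresponds to TensorFlow's `alpha` attribute and the
    `negSlope` field of the _LReLU_TRT plugin node shown above.
    """
    return x if x >= 0 else neg_slope * x
```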
https://r7vme.medium.com/converting-neural-network-to-tensorrt-part-1-using-existing-plugins-edd9c2b9e42a
This is the mail archive of the libstdc++@sources.redhat.com mailing list for the libstdc++ project.

Yesterday during the Cygnus network outage, I got most of v3 to compile. I needed to define _XOPEN_SOURCE=500 and _LARGE_FILE_API (for stat64), which means that something is confusing the normal AIX mechanism which handles that.

Also, the ctype files in config/aix are not up to date with respect to the rest of the v3 sources. I got a little farther by copying and pasting from more up to date versions, but I don't know what I am doing.

The AIX version of atomicity.h worked fine with src/string-inst.cc with one additional definition (_Atomic_word) and a typo fix. The revised version is appended below.

David

/* Low-level functions for atomic operations. AIX version.
   Copyright (C) 2000. */

#ifndef _BITS_ATOMICITY_H
#define _BITS_ATOMICITY_H 1

/* Should this be type long so 64-bit word in 64-bit mode? */
typedef int _Atomic_word;

#include <sys/atomic_op.h>

static inline int
__attribute__ ((unused))
__exchange_and_add (atomic_p __mem, int __val)
{
  return fetch_and_add (__mem, __val);
}

static inline void
__attribute__ ((unused))
__atomic_add (atomic_p __mem, int __val)
{
  (void) fetch_and_add (__mem, __val);
}

static inline int
__attribute__ ((unused))
__compare_and_swap (atomic_l __p, long int __oldval, long int __newval)
{
  return compare_and_swaplp (__p, &__oldval, __newval);
}

static inline long
__attribute__ ((unused))
__always_swap (atomic_l __p, long int __newval)
{
  long __val = *__p;
  while (! compare_and_swaplp (__p, &__val, __newval))
    /* EMPTY */;
  return __val;
}

static inline int
__attribute__ ((unused))
__test_and_set (atomic_l __p, long int __newval)
{
  long __val = 0;
  (void) compare_and_swaplp (__p, &__val, __newval);
  return __val;
}

#endif /* atomicity.h */
http://gcc.gnu.org/ml/libstdc++/2000-09/msg00114.html
About namespaces and namespace use

I wanna give some unsolicited advice. Don't use the default namespace. Just don't. There are reasons.

Namespaces are fundamental to how kubernetes is designed and the implied behaviour of the objects used in kubernetes. The term default will not serve you in the context (pun intended) of designing kubernetes clusters.

There are two basic types of object states in this regard - namespaced and non-namespaced. The implied design is that objects in the default namespace are NOT namespaced. This has many implications for security and other unpleasant topics.

A good motivation for using namespaces is that you can clean up failed complex deployments easily by just deleting the namespace used for that deployment. This allows rapid iteration after the usual typo is found and corrected.

So there. I said it. Learn how to use namespaces and never, ever use the default.

It is possible to overuse namespaces. Again, there are implications. For each namespace created, there is duplication of objects for etcd. Remember - the essential concept is that you get a virtual, standalone cluster in that namespace. And that has consequences that should be considered in kubernetes architecture design and deployment.
https://hackaday.io/project/175422-project-dandelion/log/186464-a-flower-by-any-other-name
I think it would make more sense to define which types can be read from arbitrary memory (even uninitialized) safely. That basically just boils down to all bit patterns being a valid value for the type. This would also reduce the number of unsafe code footguns in low level systems programming in general.

I don't know this for sure, but I strongly suspect LLVM has deeply-baked-in assumptions to the effect that uninitialized memory is never read, except maybe via `unsigned char`, and even then it might just be a special case for `memcpy`.

Does LLVM even have the concept of uninitialized memory? I mean, at most it's just a deliberate annotation. There is no way for code to determine whether memory returned from a function like `malloc()` is initialized or not; that meaning is attached on another level of abstraction entirely. The principal example here is padding bytes. You can initialize memory by assigning a struct value, and still face the reality that reading it back byte by byte is UB by Rust rules. That's not a sane definition. More evidence against what you suggest is that C does not consider reading uninitialized memory to be UB, yet clang seems to cope just fine.

I am not at all familiar with the innards of LLVM, but I would be very, very surprised if it didn't.

That's just not true. The term the C standard uses for uninitialized memory is "indeterminate value", and it consistently assumes throughout the normative text that reading an indeterminate value via any type other than `unsigned char` provokes undefined behavior.
There is a technicality in play, because the normative definition of "indeterminate value" is "an unspecified value or a trap representation", which can be understood to imply that an indeterminate value can be read via any type that has no trap representations -- but that is almost certainly a drafting error, and both LLVM (Clang) and GCC will trigger undefined behavior upon reading an indeterminate value via any type other than `unsigned char`. Note that `memcpy` and friends are specified to behave as-if they access memory via `unsigned char *`.

So in C, padding bytes are special-cased in several places with the intention that you can always interact with a `struct` safely (byte-by-byte or otherwise) as long as all of its value fields have been initialized. Specifically, padding bytes have unspecified, but never indeterminate, values. That would be a sensible way to deal with them in Rust if it doesn't already work that way.

What do you mean exactly by "trigger", and can you show an example of this happening?

I don't know how to put that any more clearly. What do you want to call it when the compiler starts pruning entire control flow paths because they can never be executed without executing a construct whose behavior is undefined? Sure:

```c
int foo(void);

int bar(void)
{
    int x;
    if (foo()) x = 3;
    return x;
}
```

compiles, using GCC 6.3 for x86-64, to

```
bar:
        subq    $8, %rsp
        call    foo@PLT
        movl    $3, %eax
        addq    $8, %rsp
        ret
```

See how the return value is unconditionally 3? This optimization would not be allowed if reading an uninitialized variable produced a merely "unspecified" value. If you tag the declaration of `foo` with `__attribute__((pure))` (this means "I solemnly swear that `foo` has no side effects"), it won't even bother calling `foo`. And, due to a long-standing pass-ordering problem, you don't get a warning about the use of an uninitialized variable! (The lack of a warning is GCC bug 18501.)
Clang 3.8 does the same thing (the generated assembly language is not quite the same, but that's only because it does the stack alignment a little differently), but at least it does warn about the uninitialized variable.

This is not a memory access. Uninitialized locals and uninitialized memory (indirectly accessed through a pointer) have very different properties. In particular, there is no way for the compiler to discern what's on the other side of the pointer (apart from the declared type), so it can't make any optimization that takes that into account.

I understand why you think that, but it's not true anymore. This small modification of the program I posted earlier...

```c
#include <stdlib.h>
extern int foo(void);

int bar(void)
{
    int *x = malloc(sizeof(int));
    if (!x) abort();
    if (foo()) *x = 3;
    int r = *x;
    free(x);
    return r;
}
```

compiles to...

```
bar:
        pushq   %rax
        callq   foo
        movl    $3, %eax
        popq    %rcx
        retq
```

... with clang 3.8. (gcc6 isn't this clever.) I suspect clang/LLVM managed to do this by "lowering" the `malloc` allocation to a local variable, thus converting the program into the previous program, but if there were a distinction between uninitialized locals and uninitialized memory, that transformation would be invalid.

To get this to a somewhat more high-level point, LLVM has `undef` and `poison` values that both have very special behavior, and in particular, neither of them corresponds to any particular bit pattern. The sad truth is that we cannot have both "everything is a bit pattern" and keep all the fancy optimizations compilers do. Rust will have to have something similar. miri has `Undef`, which ironically is more like LLVM's `poison` than LLVM's `undef` (but that's good, LLVM's `undef` is pretty crazy).

C decided to treat `unsigned char*` specially to make it possible to implement `memcpy`. Rust could do the same with `u8`, but that somewhat feels wrong -- why `u8` and not the other integer types? And this is why @nikomatsakis brought up this union type, which is supposed to express "either T or Undef":

```rust
union Uninit<T> {
    init: T,
    uninit: ()
}
```

I concur, `char` in C tries to be too many different things at once and it just causes trouble. `std::mem::byte` maybe?

I might be missing something, but if you can't tell statically whether or not something is initialized, how does a union like that help? If you read `init` from uninitialized memory, it's still UB, and you have no way of knowing whether it is.

Perhaps I should clarify what I'm talking about. Let's go with a very specific example:

```rust
#[inline(never)]
fn memcpy64(dest: &mut [u64], src: &[u64]) {
    for i in 0..src.len() {
        dest[i] = src[i];
    }
}
```

As far as I can tell, when this function compiles, the resulting machine code is always correct, even if the memory that `src` points to hasn't been touched since the motherboard first powered on. Here, the function must never be inlined because there is no other way to express the constraint that the compiler doesn't make assumptions about `src`, but it shows that "uninitialized memory" is a dubious concept. Making those reads "UB by definition" doesn't seem well motivated. There is no actual "uninitialized memory" in a running computer.

The idea is that you wouldn't read the `init` field. You have a memcpy somewhat like this:

```rust
fn memcpy64(dest: &mut [Uninit<u8>], src: &[Uninit<u8>]) {
    for i in 0..src.len() {
        dest[i] = src[i];
    }
}
```

I have to say though that this does feel pretty hacky. Not sure I am happy with this as the work-around to access `Undef`.

The machine code being correct doesn't mean anything except that the compiler is not "smart" enough to notice all instances of UB.

That's true. However, uninitialized memory is still a very useful concept in describing the behavior of programs, in particular if we want to consider our programs at a higher level than assembly languages do. I suggest you do some Googling about LLVM's `poison` and `undef`, there should be numerous examples out there explaining why the LLVM developers considered it necessary to add these concepts even though they do not "exist" in a running computer. A brief search brought up this note and this first of a series of blog posts, for example.

In a running computer, pointers and `usize` are also "the same" type. If we constrain ourselves by what things are on a running computer, there are many valuable optimizations that we cannot do. For example, we could never move a variable access across a function call -- after all, the unknown function we call could "guess" the address of our local variables in memory, and access them. In a running computer, nothing stops them. But we don't want to permit functions to do that, precisely because we do want to perform optimizations. To this end, when we specify the behavior of programs, we imagine our computer to have "extra bits" that we use to keep track of things like whether something is a pointer or an integer, or whether memory is initialized. I also recently wrote a blog post motivating this kind of "extra state".

"Really" high-level languages like Java have a much easier time with all of this because they don't allow programmers to ever observe things like uninitialized memory. "Really" low-level languages like Assembly do not permit interesting optimizations, so they can get away with a very bare-bones kind of representation. Languages like C and Rust, on the other hand, want to eat the cake and have it, too -- they want to do optimizations based on high-level intuition while at the same time permitting the programmer to observe low-level details (like the fact that padding bytes exist). Properly doing this is a hard and unsolved problem.

The nomicon gives an exhaustive list of UBs in Rust, and it's pretty short. "Reading uninitialized memory" is the odd man out. Every single one of the remaining sources of UB is motivated by pure technical necessity. But "reading uninitialized memory" is not UB by necessity; that one's an arbitrary choice. It's perfectly possible, even simple, to let rustc emit code with no undefined memory. Think about it. It's already impossible to read or reference undefined locals at the Rust level, except for that one ugly intrinsic. If the intrinsic didn't exist, Rust would be one of those "really" high-level languages already. On the other hand, the decision whether or not the heap allocator returns "uninitialized" memory is completely arbitrary metadata, and while it's certainly `unsafe` to access that memory (interpreting it as certain types is UB), there is nothing that would make it inherently broken.

Edit: To clarify, I'm not arguing against the concept of uninitialized (heap) memory. I'm arguing that the blanket statement "reads of uninitialized memory are UB" doesn't make sense. It should be unsafe, not UB. The result of `std::mem::uninitialized()` is the obvious exception, and should be considered as a separate category.

I don't think that's the case. Rust permits you to do all sorts of nefarious things with pointers, things that involve observing the offset of fields and how structs are laid out in memory. Have a look at the code of `Rc::from_raw`. Another example is `HashMap`, which allocates one large chunk of memory and then splits it into three pieces, namely, an array of hashes, keys, and values. This is only possible if you take a low-level view of memory, where everything is ultimately just a bunch of bytes.

Oh, I see. I misunderstood you then, sorry. What is your stance on reading uninitialized memory at type `bool`? That type is an `enum`, which typically guarantees that the value is either 0 or 1. If you now pass an uninitialized `bool` somewhere, is it okay for that to be UB?

The point that `mem::uninitialized` and `malloc` yield "the same kind of" uninitialized memory has already been made above in this thread. Existing compiler transformations will happily turn a `malloc` into a local variable if they can prove that it is not used after the function returns. I also feel there's little gained from distinguishing the two.

Definitely UB. There is no limit to the harm you can make by breaking type invariants. This is not strictly limited to fundamental types or uninitialized memory either. Transmute from an initialized byte will do the same thing, and plenty of library types have internal invariants that will cause UB if broken.

Rust doesn't use `malloc()` (not necessarily, anyway), and LLVM doesn't either AFAICT. It's just an external function that clang in particular has special optimizations for. That doesn't mean it's a good idea for Rust to treat the heap the same way. I mean, what are the odds that your code just happens to not require the heap allocations you are making, and you never noticed? Fewer UBs in tricky unsafe code is IMO a gain well worth the distinction.

What I meant is, it doesn't allow any way to create undefined values you can legally read (apart from said `mem::uninitialized()`, and putting aside the ambiguity of heap allocation). Of course you can always use unsafe code to create pointers to places on the stack you shouldn't see, but that's UB by virtue of breaking the aliasing rules, I think.

The idea of treating heap memory and stack memory differently w.r.t. UB is an interesting one. It goes completely against my intuition, but it took me a while to understand why. I believe the reason why stack and heap memory should be the same in this regard is that all memory should be interchangeable. I don't mean that the optimization discussed earlier (demoting mallocs) should be legal; I mean that I want to choose how I allocate memory solely based on considerations like allocation lifetime and performance and availability (e.g. whether I even have a heap), not by language lawyering about UB. Especially when I'm doing low-level, unsafe work where I treat something as a bag of bits and do dangerous things with it, I don't want to care where in memory those bits reside. I have enough trouble getting the actual bit bashing right. I don't want to track down a misoptimization caused by moving a temporary buffer from a `Vec` to a stack-allocated array (or the other way around).

The virtue of treating heap and stack alike also shows elsewhere: your proposed `memcpy64` is only well-defined (under the proposed "reading uninit heap memory is okay") if it copies from a heap allocation into another heap allocation. IIUC, the `inline(never)` is supposed to force the compiler to conservatively assume the heap-heap case, but that's not how this works. Language semantics must be defined independently of "what the compiler can see", and if you say "reading uninitialized bits from the stack is UB", then calling `memcpy64` with stack buffers as arguments is UB, period. (And this will inevitably be exploited by a compiler author trying to squeeze out a few percent out of a benchmark suite.) So a rule that distinguishes stack and heap memory makes it actually harder to write a correct memcpy, because it's easy to write something that's only UB for stack memory (i.e., where it's even harder to observe the UB).

Of course, one could invent arbitrary semantics that are aimed at preventing certain reasoning steps by the compiler. For example, one solution would be to draw a line at function boundaries (perhaps only those that are `inline(never)`) and say that pointer arguments are to be treated "as if" they were referring to heap memory. Besides being very inelegant and throwing the whole object model into disarray, this will inevitably have fallout for the compiler's ability to analyze and optimize. So I am not a fan of this approach either (at least in principle, I won't rule out that there's some twist that is relatively simple and brings much advantage.)

PS: That can easily happen in overly general code spread over multiple functions. But aside from the specifics, it doesn't sound very appealing to rule out optimizations just because they sound like good code shouldn't need them.
https://internals.rust-lang.org/t/role-of-ub-uninitialized-memory/5399
At first glance, Swift looked different as a language compared to Objective-C because of the modern code syntax. Secondly, the strict and strong type system is also different. This talk takes an under-the-hood deep dive into the Swift type system's structure and gives tips on how to use it in a proper way.

Introduction

I'm Manu, and I will talk to you about data types today. Swift is pretty simple compared to other languages, and Swift is pretty clear in the architecture of its typing system.

It's All About Types

There are two things to know first. First, Swift says there shall not be root - $noRoot. And we will never have nil. There's no specific root type in Swift. Other languages with static type systems, like Java or C#, have a root type. Swift does not have a root type, and empty values are not allowed.

In Java, an Integer is just a number. The inheritance line is not long, but we do have two stages of inheritance over the type itself, with interfaces, which conform to a protocol. Objective-C is similar, as it also conforms to a protocol. In Swift, Int is a type of itself, and Swift defines its data types as structures, not classes.

Dive into Types

In Swift, everything is a structure with no inheritance. This is done so we can have super loose coupling among the types, which can be extended throughout the whole type system. This allows for clean architecture.

Named Types

Named types are types that have names: classes, structures, enums, and protocols; essentially everything you are giving a name when you are using it.

Compound Types

A compound type is a type with no name. The two kinds of compound types are tuples and functions.

Tuples

```swift
var greatTuple = (x: 10, y: 20)
greatTuple = (x: 12, y: 50)
greatTuple = (0, 0)
print("\(greatTuple.0) == \(greatTuple.x)")
```

This greatTuple variable has exactly the type (Int, Int). This is a real type, it just looks a bit different. You can attach, or assign, a different (Int, Int) tuple. To do so, leave out the parameter names and use the values themselves. You can read the values with .0, .1, and so on. Or, you can use the parameter names again. Relabeling or retyping the tuple, however, won't compile:

```swift
greatTuple = (xCoord: 10, yCoord: 20) // error: labels don't match
greatTuple = (x: "manu", y: 20)       // error: types don't match
```

Let's continue with tuples, and where they are really meaningful.

```swift
func randomColorScheme() -> (fg: UIColor, bg: UIColor) {
    let foreground = randomForegroundColor()
    let background = randomBackgroundColor()
    return (foreground, background)
}

let colorScheme = randomColorScheme()
view.backgroundColor = colorScheme.bg
```

Suppose we want to return more than one value from a function. This is done by just defining the types in the return as a tuple.

Tuple types are unnamed, but you can give them a name. With a typealias, you can do exactly this:

```swift
var point = (13, 12)

typealias Point = (Int, Int)
var myPoint = Point(10, 12)
```

```swift
typealias Point = (x: Int, y: Int)
var myPoint = Point(x: 10, y: 12)
var myPoint = Point(10, 12)
```

Here, we are defining a tuple of just two Ints. There is a type alias named Point.

Function Types

Like tuples, function types are also common.

```swift
// Signatures only; bodies omitted for illustration.
func processPerson(withID: Int) -> () {}
func processVIP(withID: Int) {}

struct Person {
    var firstname: String
    var lastname: String
}

func nameForPerson(withID: Int) -> (name: String, age: Int) {}
func nameForPerson(withID: Int) -> (Person) {}
func nameForPerson(withID: Int) -> Person {}
func detailsForPerson(withID: Int) -> (obj: Person, age: Int) {}
```

If I want to return nothing, I can get by with writing nothing. The part exactly after the name of the function, up to the first bracket, is the function type.

Variadic Parameters

A lot of programming languages actually have this.

```swift
func printAll(_ numbers: Int...) {
    for number in numbers {
        print("the number \(number)")
    }
}
```

I can pass, say, four parameters to this function, and they are all Int. As soon as you add three dots after the type identifier here, you can use this variable as an array. Treat it like an array and just use it.

Closures

Closures are very similar to blocks in Objective-C, with a few differences.

```swift
let names = ["Igor", "Horst", "Manu", "Alex", "David"]

func ascending(_ s1: String, _ s2: String) -> Bool {
    return s1 < s2
}

var ascendingNames = names.sorted(by: ascending)
```

A closure is actually a higher usage of a function type. Above, we just have two types of types; the closure is defined quite similarly to the function itself. This is because it takes the parameters, the arrow, and the return type as the type signature. This makes functions and closures siblings.

Blocks vs. Closures

The syntax of blocks and closures in Objective-C was often difficult for me to remember, which is why I want to show you the following two references: Block Syntax and Closure Syntax.

The Sugar On Top

We have sugar on top in Swift - Optionals. Optionals are not that easy to understand at first, considering the following definition:

```swift
public enum ImplicitlyUnwrappedOptional<Wrapped> : ExpressibleByNilLiteral {
    case none
    case some(Wrapped)
    public init(_ some: Wrapped)
    public init(nilLiteral: ())
}
```

This is just an enum, which has something in it, or nothing. With an optional, you can do a lot:

```swift
var myName : Optional<String>
myName = "manu"

var theName = "manu"

if myName != nil && theName != nil {
    print(myName)
    print(theName)
}
```

An optional is something or nothing; the value that is wrapped in the optional is your "safety net". We want to print an optional, and Swift does exactly this. It prints our optional.

To forcibly unwrap an optional, you can use the bang operator, like this:

```swift
var myName : String? = "manu"
print(myName)
print(myName!)

if let name = myName {
    print(name)
}
```

If there is no value in the optional, it will fail, and your program will crash. I recommend not force unwrapping things. Instead, you can unwrap conditionally, or do the following:

```swift
var name : String? = "manu"

guard let myName = name else {
    print("empty name - can't proceed")
    return
}
print(myName)
```

We are defining an optional like before; this is just a string which I give a value to, and we only continue after the guard block if myName contains a value.
In this article, I'll dive into securing a web API while still opening it up to apps. We'll be using OAuth2 and the Windows Azure Access Control Service to secure our API yet provide access to all those apps out there.

Why would I need an API?

A couple of years ago, having a web-based application was enough. Users would navigate to it using their computer's browser, do their dance and log out again. Nowadays, a web-based application isn't enough anymore. People have smartphones, tablets and maybe even a refrigerator with Internet access on which applications can run. Applications, or "apps". We're moving from the web towards apps.

If you want to expose your data and services to external third parties, you may want to think about building an API. Having an API gives you a giant advantage on the Internet nowadays. Having an API will allow your web application to reach more users. App developers will jump onto your API and build their app around it. Other websites or apps will integrate with your services by consuming your API. The only thing you have to do is expose an API and get people to know it. Apps will come. Integration will come.

A great example of an API is Twitter. They have a massive data store containing tweets and data related to that. They have user profiles. And a web site. And an API. Are you using the website to post tweets? I am using the website, maybe once a year. All other tweets come either from my Windows Phone 7's Twitter application or through a third-party Twitter client which provides added value in the form of statistics and scheduling. Both the app on my phone as well as the third-party service are using the Twitter API. By exposing an API, Twitter has created a rich ecosystem which drives adoption of their service, reaches more users and adds to their real value: data which they can analyze and sell.

API characteristics

An API is simply a software-to-software interface, defined by whoever is exposing the API to public or private users.
It’s a programming contract between system components, defining how these components interact with each other. It defines constraints, both technical as well as legal. Twitter for example defines a usage constraint: if you are using their API without paying you will be limited to a certain number or requests. Modern web API’s are built around the HTTP protocol. This makes it easy to expose your API: the web has the infrastructure ready to access your API. As a matter of fact, any web site has been using this infrastructure for over a decade to expose its services in your browser. Your browser is simply an API client built on top of HTTP. Here’s an example HTTP request to (issued using the excellent Fiddler2 web debugger, hence the User-Agent value you’re seeing): GET HTTP/1.1 User-Agent: Fiddler Accept: text/html, */* Host: Google responds to this request with the HTML contents you all know and use. There are two very interesting concepts in this request: the HTTP verb (in this case: GET) and the Accept header (in this case text/html, /). Both define what the client wants from the server: GET tells the server to simply send data as a response, text/html, / tells the server that the response should preferably be in the HTML format, but optionally any other format will work for the client too. We can inform the server of what we intend it to do using one of the standard HTTP verbs: * GET – return a resource, like HTML, images or an employee record * HEAD – check if the resource exists but don’t return it * POST –update an existing resource * PUT – create a new resource * MERGE – merge values with an existing resource, like adding a property to an employee recordDELETE – delete data resource There are more verbs if you like, but these are the most widely used. 
Let’s have a look at Google’s response as well: HTTP/1.1 200 OK Connection: Keep-Alive Content-Type: text/html; charset=ISO-8859-1 <!doctype html><html itemscope="itemscope" itemtype=""><head><meta itemprop="image" content="/images/google_favicon_128.png"><title>Google</title> <!-- more HTML and JavaScript --> </html> Interesting here is the first line: Google responds with a 200 status code. 200 means that everything is okay: the resource has been found at the location specified by the client and it will be returned as part of the HTTP response. In this case, the resource is the HTML and JavaScript markup that makes up the Google homepage. There’s a large number possible status codes you can use. Here are some you will most commonly encounter: - 200 OK – Everything is OK, your expected resource is in the response. - 201 Created – The resource has been created successfully - 401 Unauthorized – You either have to log in and/or you are not allowed to access the resource. - 404 Not Found – The resource could not be found. - 500 Internal Server Error – The server failed processing your request. - 503 Service Unavailable – The service is unavailable at this time There’s a theme in these status codes. 1XX are informational. 2XX codes mean “successful”. 3XX tell you to go elsewhere, like our 302 example above. 4XX means the request the client sent cannot be completed because the resource wasn’t found, authentication is required or something else.5XX means the server has had a problem, like a configuration issue or a database connection failure resulting in the feared error 500 – Internal Server Error you see on some websites. Building an API in ASP.NET Along withASP.NET MVC 4, Microsoft also ships a framework for building Web API’s based on HTTP and REST in .NET. The ASP.NET Web API framework looks similar to ASP.NET MVC in that it has controllers, routes, filters and all other great features to build your API (explained on the ASP.NET website if you need a refresh). 
In this article, I’m not going to dive deep into the internals of ASP.NET Web API but I do want to give you a feel on how you can build an API with it. Here are four basic conventions for ASP.NET Web API: - Requests have an HTTP verb defined. This maps to the API controller’s action method. - Requests have an Accept header. This is handled by ASP.NET Web API’s MediaTypeFormatter and will transform the request to your controller from JSON, XML or whatever format you want to add as a MediaTypeFormatter. - Responses have an HTTP status code. - Responses are formatted by ASP.NET Web API’s MediaTypeFormatter into JSON, XML or whatever format you want to add as a MediaTypeFormatter. Here’s a sample API controller: public class EmployeeController : ApiController { public Employee Get(int id) { return new Employee { Name = "Maarten" }; } public void Post(Employee employee) { if (employee == null) { throw new HttpResponseException( new HttpResponseMessage( HttpStatusCode.BadRequest) { ReasonPhrase = "employee should be specified" }); } // ... store it in the database ... } } Our API now has one controller, EmployeeController, which accepts both the HTTP GET and POST methods. Get() simply returns an employee resource, which is formatted as JSON or XML based on the Accept header the client provided. Here’s a sample request/response: Request: GET HTTP/1.1 User-Agent: Fiddler Accept: application/json Host: localhost:3476 Response: HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8 Date: Wed, 18 Jul 2012 08:53:56 GMT Content-Length: 29 { "name": "Maarten" } The Post() method (triggered by the HTTP POST verb) creates a new employee in our database. If no employee is posted, we respond with an HTTP status code 400 Bad Request to inform the client that he really has to specify the employee parameter. Easy? Thought so. 
Getting to know OAuth2

If you decide that your API isn't public, or that specific actions can only be done for a certain user (let that third party web site get me my tweets, Twitter!), you'll be facing authentication and authorization problems. With ASP.NET Web API, this is simple: add an [Authorize] attribute on top of a controller or action method and you're done, right? Well, sort of...

When using the out-of-the-box authentication/authorization mechanisms of ASP.NET Web API, you are relying on basic or Windows authentication. Both require the user to log in. While perfectly viable and a good way of securing your API, a good alternative may be to use delegation.

In many cases, typically with public APIs, your API user will not really be your user, but an application acting on behalf of that user. That would mean the application has to know the user's credentials. In an ideal world, you would only give your username and password to the service you're using rather than just trusting the third-party application or website with it. You'll be delegating access to these third parties. If you look at Facebook for example, many apps and websites redirect you to Facebook to do the login there instead of through the app itself.

Before I dive into OAuth2, I want you to remember one sentence from the previous paragraph: "your API user isn't really your user, but an application acting on behalf of a user". This sentence summarizes what OAuth2 tries to solve: provide support for the scenario where a user can grant an application access to resources owned or accessible by that user.

In OAuth2, we can define the following actors:

* Client – The application wanting to act on behalf of the user
* User – The user who wishes to grant the Client the right to act on his behalf
* Authorization server – A trusted system in which the User can grant a Client privileges to access one or more resources with a specific scope during a specific timeframe.
These claims will be described in an access token which is signed by this trusted system.

* Resource server – The system (or web API) the Client wishes to access

OAuth2 supports many authentication flows, often depending on the client type (e.g. is it running inside a browser or on a mobile device). The flow I'll be using throughout the rest of the article is one which allows other websites to consume your data and services, and it roughly looks like the following:

When a Client wants to use the Resource server, it has to ask the Authorization server for an access token. The Client presents its client_id and its intentions (or scope), for example: "I want to be able to access your timeline". The Authorization server asks the user to log in and to allow or disallow the Client and the requested scope. When approved, the Authorization server sends the client a key or code which the Client can then exchange, together with the client_secret, for an access_token and a refresh_token.

The access_token received can be used to access the Resource server or API. It contains a set of claims (you can access user X's timeline between 9AM and 10AM on the 1st of January). The token is signed using a signature which is known and trusted by the Resource server. The Resource server can use these claims to gather the correct information and allow/disallow the requested grants for a specific resource.

An access_token can expire. Whenever the Resource server receives an expired access_token from your client, it should not accept the claims presented in the access_token and potentially deny access to a resource. Luckily there's also the refresh_token: this token can be used to refresh the access_token without requiring user action.

Windows Azure Access Control Service

One of the interesting components in the Windows Azure platform is the Access Control Service (ACS). ACS allows you to outsource your authentication and authorization woes and have Microsoft handle those.
On an application a colleague and I have been working on, you'll find that you can log in through a variety of identity providers (Windows Live ID, Google, Facebook, ADFS, …). We don't have to do anything for that: ACS solves this and presents us with a set of claims about the user, such as his Google e-mail address. If we want to add another identity provider, we simply configure it in ACS and, without us modifying our code, users can log in through that new identity provider.

Next to that, ACS provides a little-known feature: OAuth2 delegation support. The idea is that your application's only job is to ask the user whether a specific application can act on his or her behalf, and to store that decision in ACS. From then on, the client application will always go to ACS to fetch an access token and a refresh token which can be presented to your API.

This approach comes in very handy! Every client application will only have to ask our Authorization server once for user consent, after which ACS will take care of handing out access tokens, expiring tokens, renewing tokens and so on. ACS handles the entire authentication and authorization load for us, even with 1 billion apps and users consuming my API. And all of that for just 19 US$ per million actions on ACS (see the pricing calculator).

Consuming an API protected using OAuth2

Let's step out of the dry theory. For a demo, I've been working on an application called BrewBuddy. It's a website in which people who share my hobby, brewing beer, can keep track of their recipes and brews as well as share recipes with others. From a business point of view, BrewBuddy would like to be able to expose a user's list of recipes through an API. That way, third-party websites or apps can use these recipes to enhance their own service. An ecosystem of apps and websites integrating with BrewBuddy may come alive.
A simple website called MyBrewRecipes wants to integrate with BrewBuddy and fetch the recipes for a user. It's a pretty simple website today, but they want to be able to enrich their own experience with recipes coming from BrewBuddy. A user of MyBrewRecipes can choose to download his list of recipes from BrewBuddy and use them in MyBrewRecipes.

After navigating to MyBrewRecipes and clicking a link saying "import recipes from BrewBuddy", the user is redirected to BrewBuddy and asked to log in. After logging in, the user has to give consent and allow or disallow MyBrewRecipes to fetch data from BrewBuddy on the user's behalf. After granting access, I'm redirected back to MyBrewRecipes and a simple list of all my recipes stored in BrewBuddy is fetched from the BrewBuddy API.

A lot of things happened behind the scenes though. Here's the flow that took place:

- The User visited MyBrewRecipes and clicked a link there to fetch recipes from BrewBuddy.
- MyBrewRecipes redirected the user to BrewBuddy's Authorization server.
- BrewBuddy's Authorization server asked for user consent (yes/no) and, when the user clicked yes, BrewBuddy created a delegation for that user in Windows Azure ACS. BrewBuddy then redirected the user back to MyBrewRecipes with an authorization code in a parameter.
- MyBrewRecipes requested an access token and a refresh token from Windows Azure ACS using the authorization code received from BrewBuddy.
- MyBrewRecipes accessed the BrewBuddy API, passing the access token. In this case, the access token contains several claims: MyBrewRecipes can access maarten's (user) recipes (scope) for the next 15 minutes (expiration).
- BrewBuddy's API validated the token (a simple hash check) and, when valid, returned the recipes for the user who is using MyBrewRecipes.

If after this MyBrewRecipes wants to use the API again on behalf of our User, that site would simply fetch a new access token from Windows Azure ACS without having to bother the BrewBuddy website again.
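The redirect and code-for-token steps of this flow are plain HTTP. Here is a minimal Python sketch of the client (MyBrewRecipes) side of those steps; the endpoint URLs, client id and secret are made-up placeholders, since the real values come from BrewBuddy's authorization server and your ACS namespace:

```python
import urllib.parse

# Hypothetical endpoints for illustration only; the real values come
# from BrewBuddy's authorization server and the ACS namespace.
AUTHORIZE_URL = "https://brewbuddy.example/authorize"
TOKEN_URL = "https://accesscontrol.example/v2/OAuth2-13"

def build_authorize_url(client_id, redirect_uri, scope):
    """Step 2: the URL the client redirects the user to for consent."""
    params = {
        "response_type": "code",   # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return AUTHORIZE_URL + "?" + urllib.parse.urlencode(params)

def build_token_request(client_id, client_secret, code, redirect_uri):
    """Step 4: the form body that would be POSTed to TOKEN_URL to
    exchange the authorization code for an access/refresh token pair."""
    return {
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "redirect_uri": redirect_uri,
    }
```

In a real client you would form-encode the second dict, POST it to the token endpoint, and read access_token and refresh_token from the JSON response; an OAuth2 client library will do all of this for you.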
Of course, for that to work, MyBrewRecipes should store the refresh token on their side. If they don't, their only way to consume BrewBuddy's recipes is by going through the entire flow again.

Sounds complicated? It isn't. It's a fairly simple flow to use. You could build all of this yourself, or delegate the heavy lifting on the server side to Windows Azure ACS. On the client side, a number of frameworks for a variety of programming languages (.NET, PHP, Ruby, Java, Node, …) exist which make this flow easy to use for your API consumers.

Building an API protected using OAuth2 (and Windows Azure ACS)

On the server side (your application), building the API is as easy as creating ApiController derivatives using the ASP.NET Web API framework. I've been working on a GitHub project called WindowsAzure.Acs.Oauth2 (also available on NuGet) which makes setting up your API to use Windows Azure ACS for OAuth2 delegation very easy.

Let's start with the API. In BrewBuddy, the API serving a user's recipes looks like this:

[Authorize]
public class RecipesController : ApiController
{
    protected IUserService UserService { get; private set; }
    protected IRecipeService RecipeService { get; private set; }

    public RecipesController(IUserService userService, IRecipeService recipeService)
    {
        UserService = userService;
        RecipeService = recipeService;
    }

    public IQueryable<RecipeViewModel> Get()
    {
        var recipes = RecipeService.GetRecipes(User.Identity.Name);
        var model = AutoMapper.Mapper.Map(recipes, new List<RecipeViewModel>());
        return model.AsQueryable();
    }
}

Fairly simple: we return a list of recipes for a given user's username. This username is provided by the access token BrewBuddy's authorization server issued to our client, and which that client is sending along with its API calls into BrewBuddy.
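Once the client holds an access token, calling the protected API is just a matter of sending the token along with each request. A hypothetical Python client could build such a request as below; note that the "Bearer" scheme is an assumption on my part, so check which authorization scheme your resource server actually expects (early ACS/OAuth2 samples used WRAP-style headers instead):

```python
import urllib.request

# Hypothetical endpoint; BrewBuddy's real API URL would go here.
API_URL = "https://brewbuddy.example/api/recipes"

def recipes_request(access_token):
    """Build an authenticated request for the recipes API.
    The "Bearer" scheme is an assumption; your resource server may
    expect a different authorization scheme for the OAuth2 token."""
    req = urllib.request.Request(API_URL)
    req.add_header("Authorization", "Bearer " + access_token)
    return req
```

Sending the request (urllib.request.urlopen(req)) would then hit the [Authorize]-protected controller with the token attached.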
Installing the WindowsAzure.Acs.Oauth2 assembly into your project handles the OAuth2 complexity for you: all the parsing and validating of the access token is taken care of. After installing the NuGet package, some dependencies will be installed into your ASP.NET Web API project, as well as some new source files:

- App_Start/AppStart_OAuth2API.cs – Makes sure that the OAuth2 access token is transformed into a ClaimsIdentity for use in your API. This ensures that in your API, calling User.Identity.Name returns the user on whose behalf the client application is calling your API. Next to that, other claims such as e-mail, scope or whatever your authorization server stored in the access token are available as claims in this identity. On a side note, it currently only handles Simple Web Tokens (SWT) as the supported access token type. Make sure you configure the ACS relying party to make use of SWT when using WindowsAzure.Acs.Oauth2.
- Controllers/AuthorizeController.cs – An authorization server implementation which is configured by the settings in Web.config (explained later on). You can override certain methods here, for example if you want to show additional application information on the consent page.
- Views/Shared/_AuthorizationServer.cshtml – A default consent page. This can be customized at will. For example, the page BrewBuddy showed on which the user had to allow or deny access was served by the AuthorizeController and this view.

Next to these files, 6 appSettings entries are added to your Web.config. I would recommend encrypting these in your configuration files to ensure security.
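The "simple hash check" mentioned earlier is the SWT signature: a Simple Web Token is a form-encoded set of claims whose last parameter, HMACSHA256, carries a base64-encoded HMAC-SHA256 over the rest of the token, computed with the symmetric key you share with ACS. A rough Python sketch of that check (an illustration only, not the library's actual code; a real implementation must also validate issuer, audience and the ExpiresOn claim):

```python
import base64
import hashlib
import hmac
import urllib.parse

def validate_swt(token, key):
    """Check the HMACSHA256 signature on a Simple Web Token.
    `token` is the form-encoded SWT string, `key` the 256-bit symmetric
    signing key (bytes) shared with ACS. Returns the token's claims as
    a dict if the signature matches, otherwise None."""
    content, sep, sig = token.rpartition("&HMACSHA256=")
    if not sep:
        return None  # no signature parameter present
    expected = base64.b64encode(
        hmac.new(key, content.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")
    # The signature travels URL-encoded inside the token.
    if not hmac.compare_digest(urllib.parse.unquote(sig), expected):
        return None
    return dict(urllib.parse.parse_qsl(content))
```

If the signature does not match the one recomputed with the shared key, the claims are rejected and the API should deny access.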
These appSettings entries are everything required to get OAuth2 up and running:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="WindowsAzure.OAuth.SwtSigningKey" value="[your 256-bit symmetric key configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyName" value="[your relying party name configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyRealm" value="[your relying party realm configured in the ACS]" />
    <add key="WindowsAzure.OAuth.ServiceNamespace" value="[your ACS service namespace]" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserName" value="ManagementClient" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserKey" value="[your ACS service management key]" />
  </appSettings>
</configuration>

These settings should be configured based on your Windows Azure Access Control settings. Instructions can be found on the GitHub page for WindowsAzure.Acs.Oauth2.

Conclusion

Next to your own website, apps are becoming more and more popular as an alternative way to consume your data and services. Third-party web sites may also be interested in enriching their experience with the data and services you have to offer. Why not use that as a lever to reach more users? By exposing an API, you're giving third-party app developers the opportunity to interface with your services, and at the same time they become advocates for them. Embrace them, give them a good API.

Of course, that API should be protected. OAuth2 is becoming the de-facto standard for that, but it requires some server-side coding on your part. If you just want to focus on the API and delegate the heavy lifting and scaling of the OAuth2 protocol, you may as well delegate it to the Windows Azure Access Control Service. WindowsAzure.Acs.Oauth2, an unofficial open source project leveraging the OAuth2 delegation features in ACS, will help you with that.
https://www.developerfusion.com/article/147914/protecting-your-aspnet-web-api-using-oauth2-and-the-windows-azure-access-control-service/
ASP.NET MVC #7 – Call Method on Controller from JavaScript Function / Call ASP.NET MVC Controller Method from JavaScript Function

Hi Friends,

As we know, ASP.NET MVC divides the web form into three different parts [Model, View, Controller], as we saw in Model-View-Controller in our post. ASP.NET MVC doesn't provide server-side events as ASP.NET Web Forms does, so we usually come across situations where we need to call a server-side method from the client side while using ASP.NET MVC. For example:

1) Call some server-side method after the selected index change event of a dropdown list.
2) Call a server-side method on change of a radio button.

To call the server-side method from JavaScript we use the $.ajax(url, [options]) method. For this you need to have references to jquery-1.4.1.min.js, MicrosoftAjax.js and MicrosoftMvcAjax.js in the view, like the following:

<script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>
<script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script>
<script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>

For example, we have a dropdown list ddlTest and on its selected index change event we are calling the JavaScript method onDropdownChange.

<%=Html.DropDownList("ddlDept", new SelectList(Model.lstEmployee, "Emp_Number", "First_Name", 0), "Select", new { @onchange = "onDropdownChange(this);" })%>

// JavaScript method as follows

The $.ajax function accepts its first parameter as the URL, where we have supplied the value "Employee/ServerMethodName". It means that this function will call the server-side method ServerMethodName, which is in the controller Employee, as follows.

public class EmployeeController : Controller
{
    public void ServerMethodName()
    {
        // Put your logic here
    }
}

For more on Microsoft technologies visit our site Dactolonomy of WebResource.

Thanks.
https://microsoftmentalist.wordpress.com/2011/09/05/asp-net-mvc-7-call-method-on-controller-from-javascript-function-call-asp-net-mvc-controller-method-from-javascript-function/
I have an issue when uploading the file into the data source.

Requirement: On click of a button, I need to get an XMLP report based on the template.

Steps followed:

1. On click of a button I created the XML file using PeopleCode, and when checked using WinMessage I get the XML perfect, like below, with start and end tags:

<?xml version="1.0"?>
<Operational_Details>
  <POSITION_NBR>xxxxxx</POSITION_NBR>
  <EFFDT>2019-01-17</EFFDT>
  <REG_REGION>xxx</REG_REGION>
</Operational_Details>

2. I go to the data source to register it, with XML as the data source type. I click on upload and load the same sample XML as above.

3. When I again click on the XML to check if it is right, I get the XML like below, and because of this an error comes when running the report:

<?xml version="1.0" encoding="UTF-8"?>
<Operational_Details><POSITION_NBR/><EFFDT/><REG_REGION/>

Could anyone help me out with what is wrong here and why it is coming out like that? I need to successfully run my report from PeopleCode instead of an App Engine, and I need the report in PDF format in a new tab.

Is that really an image exactly as you see, or a posting error? If it is overprinting as I see it in your post, I think it might be that there are CR (Carriage Return) characters in the XML. Whatever is reporting the output is backspacing between each attribute, so the attributes overlap. I see you are using "win message". Are you mixing Windows and Unix/Linux environments?

Hello Peter, I have attached the screenshot now for your reference.

No screenshot visible. In any case, putting something on screen means all the control characters have already been interpreted by the window manager. If you are on Linux (which you don't tell me) and have a command window, the kind of output that helps would be the text that comes out of:

od -t acx1 myFile.xml

OP, I have developed many reports in the fashion you want. I, myself, don't like the idea of creating an App Engine just so I can produce a report.
The code below describes how to execute an XML Publisher report with a button click action by a user. I am using PS/Query as my data source. I would recommend this method; not that it's better or worse, just ease of use. I've used the code below to develop at least 15 reports and they are working fine :-) Good luck!

import PSXP_RPTDEFNMANAGER:*;

&sRptDefn = "Your report defn";
&sTmpltID = "Your report template defn";
&sLangCd = %Language_User;
&AsOfDate = %Date;

&oRptDefn = create PSXP_RPTDEFNMANAGER:ReportDefn(&sRptDefn);
&oRptDefn.Get();

&rcdQryPrompts = &oRptDefn.GetPSQueryPromptRecord();
&rcdQryPrompts.Your prompt field.Value = Value1;
/* repeat above for all prompt fields */

If Not &rcdQryPrompts = Null Then
   &oRptDefn.SetPSQueryPromptRecord(&rcdQryPrompts);
   &oRptDefn.ProcessReport(&sTmpltID, &sLangCd, &AsOfDate, "PDF");
   &oRptDefn.Publish(" ", "", "", 0);
   DoSaveNow();
   &oRptDefn.displayoutput(); /* Displays the output in a new window instead of navigating to the report manager etc */
Else
   Error ("Query is missing prompt values.");
End-If;

Thank you. The issue is resolved.
https://it.toolbox.com/question/issue-with-xml-when-uploading-in-the-data-source-011719
- 27 Jan, 2014 5 commits This gives us a very tiny speedup, as size() was a const method already. - 10 Jan, 2014 2 commits - 09 Jan, 2014 1 commit - 08 Jan, 2014 2 commits - 07 Jan, 2014 2 commits - 06 Jan, 2014 3 commits - 02 Jan, 2014 4 commits Completes initial implmementation of example hooks libary, user_chk. Removed reponse packet type check from IPv6 packet_snd callout. Corrected text in user chk doxygen page. - 31 Dec, 2013 3 commits Addressed review comments which were largely minor. Limited use of extern C linkage to only the callout functions themselves. Added a dox page describing the library. Added namespace user_chk. 3207 was created quite some time ago, so master was merged into it first. - 30 Dec, 2013 4 commits The obsolete usage of AC_OUTPUT was actually causing config.status to be executed multiple times, which is a big waste of time. Changed to use the correct AC_CONFIG_FILES and AC_CONFIG_COMMANDS macros instead. - 19 Dec, 2013 3 commits - 17 Dec, 2013 7 commits Conflicts: src/bin/d2/d2_messages.mes Fixed two missed minor cleanups from the initial review. Changes are largely clean up and commentary. Some files were using #include <full/path/to/header> syntax which is not desirable as the directory may move about in the source tree, and all of the files it was including are local to perfdhcp so they should be using #include "filename.h". Recent changes to the perfdhcp tool introduced a new breakage when configuring with srcdir != objdir. These were introuced by the code using #include <tests/tools/perfdhcp/SOME_FILE.h> but there were no -I flags set to allow for including files from the top level. - 16 Dec, 2013 2 commits Conflicts: ChangeLog doc/guide/bind10-guide.xml - flag => configuration parameter - echoClientId test uses ASSERTs rather than EXPECTs - 13 Dec, 2013 2 commits
https://gitlab.isc.org/isc-projects/kea/-/commits/9eedd903b78552493ea8fd55f89b8a58b94398d1
CTL_MBOXLIST(8)

OPTIONS
       -C config-file
              Read configuration options from config-file.

       )itiative, that is, only change the mupdate server, do not
              delete any local mailboxes. USE THIS OPTION WITH CARE, as
              it allows namespace collisions into the murder.

       -w     When used with -m, print out what would be done but do
              not perform partition (directory containing a valid
              cyrus.header file) and not present in the database will
              be reported. Note that this function is very I/O
              intensive.

       -f filename
              Use the database specified by filename instead of the
              default (configdirectory/mailboxes.db).

FILES
       /etc/imapd.conf

SEE ALSO
       imapd.conf(5), cyrus-master(8)

CMU                          Project Cyrus                CTL_MBOXLIST(8)
http://www.polarhome.com/service/man/?qf=ctl_mboxlist&tf=2&of=Oracle&sf=8
ReDim Statement (Visual Basic)

Reallocates storage space for an array variable.

Parts:

- Preserve
  Optional. Modifier used to preserve the data in the existing array when you change the size of only the last dimension.
- name
  Required. Name of the array variable. See Declared Element Names.
- boundlist
  Required. List of bounds of each dimension of the redefined array.

You can use the ReDim statement to change the size of one or more dimensions of an array that has already been declared. If you have a large array and you no longer need some of its elements, ReDim can free up memory by reducing the array size. On the other hand, if your code determines that an array needs more elements, ReDim can add them.

The ReDim statement is intended only for arrays. It is not valid on scalars (variables containing only a single value), collections, or structures. Note that if you declare a variable to be of type Array, the ReDim statement does not have sufficient type information to create the new array.

You can use ReDim only at procedure level. This means the declaration context for a variable must be a procedure, and cannot be a source file, namespace, interface, class, structure, module, or block. For more information, see Declaration Contexts and Default Access Levels.

Rules

Modifiers. You can specify only the Preserve modifier, and you cannot omit the ReDim keyword if you do so.

Multiple Variables. You can resize several array variables in the same declaration statement, specifying the name and boundlist parts for each one. Multiple variables are separated by commas.

Array Bounds. Each entry in boundlist can specify the lower and upper bounds of that dimension. The lower bound is always zero, whether you specify it or not. The upper bound is the highest possible value for that subscript, not the length of the dimension (which is the upper bound plus one). Each subscript can vary from zero through its upper bound value.
The number of dimensions in boundlist must match the original rank of the array.

Empty Arrays. It is possible to use -1 to declare the upper bound of an array dimension. This signifies that the array is empty but not Nothing (Visual Basic). For more information, see How to: Create an Array with No Elements. However, Visual Basic code cannot successfully access such an array. If you attempt to do so, an IndexOutOfRangeException error occurs during execution.

Data Types. The ReDim statement cannot change the data type of an array variable or of its elements.

Initialization. The ReDim statement cannot provide new initialization values for the array elements.

Rank. The ReDim statement cannot change the rank (the number of dimensions) of the array.

Resizing with Preserve. If you use Preserve, you can resize only the last dimension of the array, and for every other dimension you must specify the same bound it already has in the existing array. For example, if your array has only one dimension, you can resize that dimension and still preserve all the contents of the array, because you are changing the last and only dimension. However, if your array has two or more dimensions, you can change the size of only the last dimension if you use Preserve.

Properties. You can use ReDim on a property that holds an array of values.

Behavior

Array Replacement. ReDim releases the existing array and creates a new array with the same rank. The new array replaces the released array in the array variable.

Initialization without Preserve. If you do not specify Preserve, ReDim initializes the elements of the new array to the default value for their data type.

Initialization with Preserve. If you specify the Preserve modifier, Visual Basic copies the elements from the existing array to the new array.

The following example increases the size of the last dimension of a dynamic array without losing any existing data in the array, and then decreases the size with partial data loss.
Finally, it decreases the size back to its original value and reinitializes all the array elements.

    Dim intArray(10, 10, 10) As Integer
    ReDim Preserve intArray(10, 10, 20)
    ReDim Preserve intArray(10, 10, 15)
    ReDim intArray(10, 10, 10)

The first ReDim creates a new array which replaces the existing array in variable intArray. ReDim copies all the elements from the existing array into the new array. It also adds 10 more columns to the end of every row in every layer and initializes the elements in these new columns to 0 (the default value of Integer, the element type of the array).

The second ReDim creates another new array, copying all the elements that fit. However, five columns are lost from the end of every row in every layer. This is not a problem if you have finished using these columns. Reducing the size of a large array can free up memory that you no longer need.

The third ReDim creates still another new array, removing another five columns from the end of every row in every layer. This time it does not copy any existing elements. This reverts the array to its original size and returns all its elements to their original default value.
http://msdn.microsoft.com/en-us/library/w8k3cys2(v=vs.80).aspx
Jase2985: why not do it the way that has been suggested? use the current device to do the ADSL connection, the NAT and the DHCP and just let the Apple device do the wireless etc?

It seems that you can't assign more than 5 sticky DHCP addresses in his modem, and he wants to use more than 10. I suspect that just assigning lease times of 24 hours would solve it, but the OP seems more comfortable configuring the Apple gear, so probably just easier that way.

DV130 @ PB Tech - showing 4 in stock at North Shore.

FAT, where are you, I am just finishing water blasting the house, and cutting everyones sneakers in 2, then I can come pay/collect!

Will send you a PM, I'm on the Shore, fwiw. I have a Draytek Vigor 130 (DV130) I can sell you if you are still after one. I used it with my AirPort Extremes along with my ADSL connection - pretty seamless. No longer need it as I have moved to Fibre.

2 Queries:

1. Will the setup just pop the DV before the HG659B and the modem/router Airport Extreme? So the DV130 does authentication or whatever, the HG659B does the VLAN10 tagging and the Apples just do the dhcp/nat like before, they gave addresses that never suffered any conflicts...

Vodafone ADSL Input ----------> RJ45 Port DV130
EXIT Ethernet LAN Port ----------> Ethernet WAN Port HG659B (WiFi OFF)
Ethernet LAN1 ----------> WAN Port AEBS (WiFi ON)
AEBS LAN1 ----------> WAN Port Sure-Signal
AEBS LAN2 ----------> WAN Port Apple TV
AEBS LAN3 ----------> WAN Port Apple Time Capsule Base Station (WiFi ON)
AEBS LAN4 ----------> WAN Port Apple Desktop Computer

2. Someone wonders, if there comes some sort of IP issue, outside of my control, and says to ask for them to give the house a static IP, and check stuff properly at their end, how am I best to go about this, its really difficult to get quality assistance w/this ISP?

Again, I should have said thank you more clearly, and come here for help earlier. h

True eh?
Sweet, thats seriously hot bananas, so to be clear, DV130 can do authentication & VLAN10 tagging on the WAN side of things, yeah, send my internet to the modem/router which is simply h-aebs? Things are looking up if this is the case, I will, summarily, execute the HG659B out of principle. Starting to feel happy, some cut sneakers, buggar. As your simply on adsl, you dont need to worry about vlan tagging at all. Your simply going to be doing a PPPoA to PPPoE translation setup which the DV130/120 is capable of. #include <std_disclaimer> Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have. 1) Set the DV130 to PPPoA-PPPoE bridge mode (forget the exact name they call it) 2) Plug LAN DV130 to WAN AEBS 3) Configure WAN (Internet) of AEBS to use PPPoE connection. Enter your Vodafone name/password in the AEBS PPPoE settings. Leave the "service name" field empty. 4) Enable DHCP/NAT on the AEBS In this setup, the DV130 is taking the PPPoATM ADSL connection, and turning it into a PPPoEthernet connection that the AEBS understands. The AEBS is your main router, doing the authentication, and providing NAT/DHCP. Because of the PPPoA-PPPoE translation, you have no requirement for a VLAN10 tag. EDIT:Spelling RunningMan & Co, A lady here, who's just turned up, her name is actually Uuber, she understand implicitly why some of you are Uber Geeks, thank you. Soon as my supplier tehehe gets back to me, I will be grabbing his DV130 and hopefully be watching films, playing music while a plethora of ipcams & whatnot operated silently, seamlessly. 
Again, kudos to GZ & the Uber Geeks, cheers, H RunningMan, hio77, Spyware, naturally freitasm & of course firefuze, A huge thank you, yesterday between frantically searching under the house, & in the pink batts for something, & running the odd internet cable (They are all too short, need 3 x 15m ethernet cables, or I might jemmy up telephone cable to do the job as I am cashless) well between doing all his poorly I managed to plug one of the aforementioned manky telephone cords into the little DV130 (its so awesome firefuze) popped an ethernet cable from it,to my old Apple & waited... while waiting I noted the DV's menu's etc, wow its a very good product, I (who I again must add who knows very very little) still recognised it as a quality bit of kit immediately - bit more waiting... reset on the AEBS & she (all) ran... perfectly too I might add, today IP's assigned are all still the same, this is with very very little setup, virtually none - I am sure, you guys would tune it, make it faster whatnot but its just humming, 'click, snap, just instant' - best of all, its still all working. Only thing (you will laugh) not working is the sureSignal, likely hunting for 192.168.1.x but stuff is all at 10.0.0.x. Is the Enterprise sure/Signal much better, or back to a Cel-Fi rip-off... or maybe just finding out how to setup the VSSignal suitably (ahhh, heres something online, just that I do not trust these instructions anymore).... Just need some Ethernet cables now, like a spiders web in here, and I can put the house back together & get a flatmate lol This was, as usual, an extremely simple problem, but it cost me literally, 100's of hours, millions of brain cells, typical;ly simplicity being the hardest thing to achieve... Common sense, the least commonly used sense by humans, As we stand.. See my Spiders Web.. thats the ATC, leads to the brekkie bar and joins the sureSignal & appleTV heading inside the AEBS via LAN ports, AEBS WAN ----> DV130 perfection. 
Good result With the sure signal, try turning it off for a few minutes, make sure it is connected with an ethernet cable to a LAN port on the first AEBS (the one connected to the DV130) and then give it time - as I understand it, they can take quite some time to come online when they see a new network or location. great news! as above, give the sure signal a little while to settle in. if it still isnt happy by tomorrow, have a word with @johnr and he will likely be able to indicate the best method to go through to make it happy again. As for the siderweb, considered floor mounting instead? being the tallest in the house, i often have i liking for cables to be low rather than... in my face! #include <std_disclaimer> Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
https://www.geekzone.co.nz/forums.asp?forumid=40&topicid=191334&page_no=2
CC-MAIN-2018-05
refinedweb
1,144
65.35
Are there any existing libraries to parse a string as an ipv4 or ipv6 address, or at least identify whether a string is an IP address (of either sort)? Yes, there is ipaddr module, that can you help to check if a string is a IPv4/IPv6 address, and to detect its version. import ipaddr import sys try: ip = ipaddr.IPAddress(sys.argv[1]) print '%s is a correct IP%s address.' % (ip, ip.version) except ValueError: print 'address/netmask is invalid: %s' % sys.argv[1] except: print 'Usage : %s ip' % sys.argv[0] But this is not a standard module, so it is not always possible to use it. You also try using the standard socket module: import socket try: socket.inet_aton(addr) print "ipv4 address" except socket.error: print "not ipv4 address" For IPv6 addresses you must use socket.inet_pton(socket.AF_INET6, address). I also want to note, that inet_aton will try to convert (and really convert it) addresses like 10, 127 and so on, which do not look like IP addresses.
https://codedump.io/share/vf73kfDsWgZB/1/checking-for-ip-addresses
CC-MAIN-2017-13
refinedweb
174
73.98
What: I am only posting the interesting parts below (the whole file is too large). Feedback appreciated.

-- FG.hs
-- Copyright (C) 2005 by Kevin Atkinson under the GNU LGPL license
-- version 2.0 or 2.1. You should have received a copy of the LGPL
-- license along with this library if you did not you can find
-- it at

{-| This -}

---------------------------------------------------------------------------
--
-- Basic Types
--

data FG a b = FG !(FGState -> IO (FG' a b, FGState))

data Event = NoEvent | Event

---------------------------------------------------------------------------
--
-- Internal data types
--

data FG' a b = FG' !(Control -> a -> IO (Control, b))

data Control = Init | Pending !EventId | Handled !EventId | Done
  deriving Eq

type EventId = Int

data AbstrWidget = forall w. WidgetClass w => AbstrWidget w

data PendingCallback = PendingCallback !EventId !(Callback -> IO ())

type Callback = IO ()

data FGState = FGState ![AbstrWidget]
                       !EventId            -- Last used callback id
                       ![PendingCallback]

---------------------------------------------------------------------------
--
-- Arrow Implementation
--

instance Arrow FG where
  arr   = arrFG
  (>>>) = combFG
  first = firstFG

instance ArrowLoop FG where
  loop = loopFG

arrFG :: (a -> b) -> FG a b
arrFG f = FG $ \s -> do
  let f' c x = return (c, f x)
  return (FG' f', s)

combFG :: FG a b -> FG b c -> FG a c
combFG (FG f1) (FG f2) = FG $ \s -> do
  (FG' f1, s) <- f1 s
  (FG' f2, s) <- f2 s
  let f c v = do
        (c, v) <- f1 c v
        (c, v) <- f2 c v
        return (c, v)
  return (FG' f, s)

firstFG :: FG a b -> FG (a,c) (b,c)
firstFG (FG f) = FG $ \s -> do
  (FG' f, s) <- f s
  let f' c (x, y) = do
        (c, x) <- f c x
        return (c, (x, y))
  return (FG' f', s)

loopFG :: FG (a, c) (b, c) -> FG a b
loopFG (FG f) = FG $ \z -> do
  (FG' f, z) <- f z
  st <- newIORef undefined
  let f' Init v = do
        (Init, (v', s)) <- f Init (v, undefined)
        writeIORef st s
        return (Init, v')
      f' c v = do
        s <- readIORef st
        (c, (_, s)) <- f c (v, s)
        (c, (v', s)) <- f c (v, s)
        writeIORef st s
        return (c, v')
  return (FG'
f', z)

---------------------------------------------------------------------------
--
-- ArrowDef
--

class ArrowDef a where
  def :: a
  -- ^Evaluates to a sensible default value. When used as an Arrow,
  -- ie on the RHS of a @-<@, evaluates to 'init' which takes a
  -- parameter for the default value, if this parameter is omitted
  -- the default value is 'def'.

instance ArrowDef () where def = ()
instance ArrowDef [a] where def = []
instance ArrowDef (Maybe a) where def = Nothing
instance ArrowDef Event where def = NoEvent
instance (ArrowDef a, ArrowDef b) => ArrowDef (a, b) where def = (def, def)

---------------------------------------------------------------------------
--
-- AbstractFunction
--

-- | An AbstractFunction is either a true function or an Arrow
class AbstractFunction f where
  mkAFun :: (a -> b) -> f a b
  mkAFunDef :: (a -> b) -> b -> f a b
  ...

---------------------------------------------------------------------------
--
-- Arrow Utilities
--

--
-- .
--
init, guard :: a -> FG a a
init d = FG $ \s -> do
  let f' Init _ = return (Init, d)
      f' c v = return (c, v)
  return (FG' f', s)
guard = init

-- def :: a -> FG a a
instance ArrowDef (a -> FG a a) where def = init

-- ArrowDef a => def :: FG a a
instance ArrowDef a => ArrowDef (FG a a) where
  def = init def -- this is not a recursion the 'def' called is a
                 -- different function

--
-- |'><' merges two events, taking the value from the signal with an Event,
-- if none of the signals have an event then the value is taken from
-- the first signal. The case where more than one signal have an event
-- shouldn't happen but if it does the value of the first signal is taken
--
(><) :: (Event, a) -> (Event, a) -> (Event, a)
v >< (NoEvent, _) = v
(NoEvent, _) >< v = v
v >< _ = v

--
-- |'tag' tags an event with a value, throwing away the old value.
-- -- can either be used as a function or an arrow -- tag :: (AbstractFunction f) => b -> f (Event, a) (Event, b) tag v = mkAFun (\(e, _) -> (e, v)) -- -- |'hold' creates a value that will hold onto a value until instructed -- to change it. 'hold' is safe to use in a loop context -- hold :: Show s => s -> FG (Maybe s) (Event, s) hold s0 = FG $ \z -> do st <- newIORef s0 let f' c x = do s <- readIORef st case (c, x) of (Init, _) -> return (c, (NoEvent, s)) (_, Nothing) -> return (c, (NoEvent, s)) (_, (Just s')) -> do writeIORef st s' return (c, (Event, s')) return (FG' f', z) -- -- |'arrIO' is like 'arr' except that the function may perform IO -- -- This may be called multiple times during a single event, so be -- careful. It is best only to perform actions with side effects -- during the actual occurrence of the event of interest. -- arrIO :: (a -> IO b) -> FG a b arrIO f = FG $ \z -> do let f' c x = do r <- f x return (c, r) return (FG' f', z) -- -- |'onEvent' will call a function on the value of the event when there -- is any sort of event otherwise it will return a default value. It is -- also safe to in a loop context when used as an arrow. 
-- onEvent :: (AbstractFunction f) => (a -> b) -> b -> f (Event, a) b onEvent f def = mkAFunDef f' def where f' (NoEvent, _) = def f' (_, v) = f v --------------------------------------------------------------------------- -- -- runFG -- -- | Runs a FG Arrow runFG :: Container () () -> IO () runFG fg = runFG' fg () -- | Runs an FG Arrow with the given input and throws away the return value runFG' :: Container a b -> a -> IO () runFG' (FG f) v = do initGUI window <- windowNew onDestroy window mainQuit containerSetBorderWidth window 10 (FG' f, FGState [AbstrWidget w] _ cbs) <- f $ FGState [] 1 [] let h id = do f (Pending id) ([], v) return () mapM_ (\(PendingCallback id instCb) -> instCb $ h id) cbs widgetShow w f Init ([], v) -- initialize loops h 0 -- set initial state containerAdd window w widgetShow window mainGUI --------------------------------------------------------------------------- -- -- Widget data type -- -- $widget --. -- type Widget p v = FG [p] (Event, v) --------------------------------------------------------------------------- -- -- Properties -- class Text a where -- | The widget label text :: String -> a class Enabled a where -- | If the Widget is enabled, ie can receive user events enabled :: Bool -> a ... --------------------------------------------------------------------------- -- -- Label Widget -- type Label = Widget LabelP () -- ^ -- * doesn't emit any events -- -- * doesn't have any readable properties -- newtype LabelP = LabelP (forall w. 
LabelClass w => w -> IO ()) labelP (LabelP a) = a instance Enabled LabelP where enabled p = LabelP (enableW p) instance Visible LabelP where visible p = LabelP (visibleW p) instance Text LabelP where text p = LabelP (\w -> labelSetText w p) instance Markup LabelP where markup p = LabelP (\w -> labelSetMarkup w p) label :: [LabelP] -> Label label = widget' (labelNew Nothing) labelP NoEventP noProp --------------------------------------------------------------------------- -- -- Button Widget -- type Button = Widget ButtonP () -- ^ -- * emits an Event when pressed -- -- * doesn't have any readable properties -- newtype ButtonP = ButtonP (forall w. ButtonClass w => w -> IO ()) buttonP (ButtonP a) = a instance Enabled ButtonP where enabled p = ButtonP (enableW p) instance Visible ButtonP where visible p = ButtonP (visibleW p) instance Text ButtonP where text p = ButtonP (textWL p) instance Markup ButtonP where markup p = ButtonP (markupWL p) button :: [ButtonP] -> Button button = widget' (widgetWithLabelNew buttonNew) buttonP (EP Event onClicked) noProp --------------------------------------------------------------------------- -- -- Container Widgets -- type Container a b = FG ([WidgetP], a) b -- ^ -- A container simply arranges the widgets of the underlying arrow in -- a fixed fashion. The first input of an arrow is for dynamically -- changing the properties of a container. The second input is passed -- to the underlying arrow. The output is the same as the underlying -- arrow. -- hbox, vbox :: [BoxP] -> FG a b -> Container a b hbox = box' hBoxNew vbox = box' vBoxNew ...
--------------------------------------------------------------------------- -- -- Generic Widget Implementation -- data EventParm w z = NoEventP | EP Event (w -> Callback -> IO z) noProp _ = return () widget' :: (WidgetClass w) => IO w -> (a -> w -> IO b) -> EventParm w z -> (w -> IO p) -> ([a] -> Widget a p) widget' create apply eventP prop ps = FG $ \(FGState ws cid cbs) -> do w <- create widgetShow w mapM_ (\a -> apply a w) ps case (eventP) of NoEventP -> do let f c ps = do unless (c == Init) $ mapM_ (\a -> apply a w) ps p <- prop w return (c, (NoEvent, p)) return (FG' f, FGState (AbstrWidget w : ws) cid cbs) (EP e cbF) -> do let f c ps = do unless (c == Init) $ mapM_ (\a -> apply a w) ps p <- prop w case c of Pending id | id == cid -> return (Handled cid, (e, p)) Handled id | id == cid -> return (Done, (NoEvent, p)) _ -> return (c, (NoEvent, p)) let cb f = do cbF w f; return () return (FG' f, FGState (AbstrWidget w : ws) (cid + 1) (PendingCallback cid cb : cbs)) ... --------------------------------------------------------------------------- -- -- Extra Documentation -- {- $ImplementationNotes Arrows essentially build up a huge tree-like data structure representing the control flow between arrows. In the current implementation the /entire/ top-level structure has to be traversed whenever an event is fired -- even if absolutely no actions need to be taken. Worse, whenever a loop is used the entire loop has to be traversed twice. Consequently, this means that any inner loops will end up being traversed four times. More generally, the deepest loop will be traversed 2^d times, where d is the depth of the loop. Thus FG will obviously not scale well for large applications. By maintaining some state information on the final value of a loop during a previous event it should be possible to avoid having to traverse a loop twice. However, avoiding the problem of having to traverse the entire tree for every event is much more difficult and requires dataflow analysis.
Precise analysis will probably require the use of Generalised Algebraic Data Types (GADT) and possible changes to how code is generated when using the arrow notation. -} {- $Requirements FG is based on gtk2hs and uses several GHC extensions. It was tested with GHC 6.2.2 and gtk2hs 0.9.7. -} --
http://www.haskell.org/pipermail/haskell/2005-March/015541.html
The only difference is that I have started reading Paul Graham's On Lisp again ! I am convinced that I will not be programming production level business applications in Lisp in the near foreseeable future. But reading Lisp makes me think differently, the moment I start writing event listeners in Java Swing, I start missing true lexical closures, I look forward to higher level functions in the language. Boilerplates irritate me much more and make me imagine how I could have modeled it better using Scheme macros. True, I have been using the best IDE and leave it to its code generation engine to generate all boilerplates, I have also put forth a bit of an MDA within my development environment that generates much of the codes from the model. I am a big fan of AOP and have been using aspects for quite some time to modularize my designs and generate write-behind logic through the magic of weaving bytecodes. The difference, once again, is that, I have been exposed to the best code generator of all times, the one with simple uniform syntax having access to the whole language parser, that gets the piece of source code in a single uniform data structure and knows how to munch out the desired transformation in a fail-safe manner day in and day out - the Lisp macro. Abstractions - Object Orientation versus Syntax Construction For someone obsessed with OO paradigm, thriving on the backbones of objects, virtual functions and polymorphism, I have learnt to model abstractions in terms of objects and classes (the kingdom of nouns). I define classes on top of the Java language infrastructure, add data members as attributes, add behavior to the abstractions through methods defined within the classes that operate on the attributes and whenever need be, I invoke the methods on an instantiated class object. This is the way I have, so far, learnt to add abstraction to an application layer. 
Abstraction, as they say, is an artifact of the solution domain, which should ultimately bring you closer to the problem domain. We have: Machine Language -> High Level Language -> Abstractions in the Solution Domain -> Problem Domain. In the case of object oriented languages like Java, the size of the language is monstrous; add to that at least a couple of gigantic frameworks, and abstractions are clear guests on top of the language layer. Lisp, in its original incarnation, was conceived as a language with very little syntax. It was designed as a programmable programming language, and developing abstractions in Lisp not only enriches the third block above, but a significant part of the second block as well. I now get what Paul Graham has been talking about: programming bottom-up, the extensible language, building the language up toward your program. Take this example: I want to implement dolist(), which effects an operation on each member of a list. With a Lisp implementation, we can have a natural extension of the language through a macro: (dolist (x '(1 2 3)) (print x) (if (evenp x) (return))) and the moment we define the macro, it blends into the language syntax like a charm. This is abstraction through syntax construction. And the Java counterpart will be something like: // .. Collection<..> list = ... ; CollectionUtils.dolist(list, new Predicate() { public boolean evaluate() { // .. } }); // .. which provides an object oriented abstraction of the same functionality. This solution provides the necessary abstraction, but is definitely not as seamless an extension of the language as its Lisp counterpart. Extending Extensibility with Metaprogramming Metaprogramming is the art of writing programs which write programs. Languages which offer syntax extensibility provide the normal paths to metaprogramming. And Java is a complete zero in this regard.
C offers more trouble to programmers through its whacky macros, while C++'s template metaprogramming facilities are no less hazardous than pure black magic. Ruby offers excellent metaprogramming facilities through its eval() family of methods, the here-docs, open classes, blocks and procedures. Ruby is a language with very clean syntax, having the natural elegance of Lisp and extremely powerful metaprogramming facilities. Ruby metaprogramming capabilities have given a new dimension to the concept of API design in applications. Have a look at this example from a sample Rails application: class Product < ActiveRecord::Base validates_presence_of :title, :description, :image_url validates_format_of :image_url, :with => %r{^http:.+\.(gif|jpg|png)$}i, :message => "must be a URL for a GIF, JPG, or PNG image" end class LineItem < ActiveRecord::Base belongs_to :product end It's a really cool DSL made possible through the syntax extension capabilities offered by Ruby. It's not so much OO that Rails exploits to offer great APIs; instead it's the ability of Ruby to define new syntactic constructs through first class symbols that adds to the joy of programming. How will the above LineItem definition look in Lisp's database bindings? Let's take this hypothetical model: (defmodel <line_item> () (belongs_to <product>)) The difference with the above Rails definition is the use of macros in the Lisp version as opposed to class functions in Rails. In the Rails definition, belongs_to is a class function, which when called defines a bunch of member functions in the class LineItem. Note that this is a commonly used idiom in Ruby metaprogramming where we can define methods in the derived class right from the base class. But the main point here is that in the Lisp version, the macros are replaced in the macro expansion phase before the program runs and hence provide an obvious improvement in performance compared to the Rails counterpart. Another great Lispy plus ..
Have a look at the following metaprogramming snippet in Ruby, incarnated using class_eval for generating the accessors in a sample bean class: def self.property(*properties) properties.each do |prop| class_eval <<-EOS def #{prop} () @#{prop} end def #{prop}= (val) @#{prop} = val end EOS end end Here the code which the metaprogram generates is embedded within Ruby here-docs as a string - evaling on a string is not the recommended best practice in the Ruby world. These stringy codes are not treated as first class citizens, in the sense that IDEs do not respect them as code snippets and neither do the debuggers. This has been described in his usual style and detail by Steve Yeggey in this phenomenal blog post. Using define_method will make it IDE friendlier, but at the expense of readability and speed. The whacky class_eval runs much faster than the define_method version. A rough benchmark indicated that the class_eval version ran twice as fast on Ruby 1.8.5 as the one using define_method. def self.property(*properties) properties.each do |prop| define_method(prop) { instance_variable_get("@#{prop}") } define_method("#{prop}=") do |value| instance_variable_set("@#{prop}", value) end end end Anyway, all these are examples of dynamic metaprogramming in Ruby since everything gets done at runtime. This is a big difference with Lisp, where the code templates are not typeless strings - they are treated as valid Lisp data structures, which the macro processor can process like normal Lisp code, since macros in Lisp operate on the parse tree of the program. Thus code templates in Lisp are IDE friendly, debugger friendly and real first class code snippets. Many people have expressed their wish to have Lisp macros in Ruby - Ola Bini has some proposals on that as well. Whatever little I have been through Lisp, Lisp macros are really cool and a definite step forward towards providing succinct extensibility to the language through user defined syntactic control structures.
OO Abstractions or Syntax Extensions ? Coming from an OO soaked background, I can only think in terms of OO abstractions. Ruby is, possibly the first language that has pointed me to situations when syntax extensions scale better than OO abstractions - Rails is a live killer example of this paradigm. And finally when I tried to explore the roots, the Lisp macros have really floored me with their succinctness and power. I do not have the courage to say that functional abstractions of Lisp and Ruby are more powerful than OO abstractions. Steve Yeggey has put it so subtly the natural inhibition of OO programmers towards extended syntactic constructs : Lots of programmers, maybe even most of them, are so irrationally afraid of new syntax that they'd rather leaf through hundreds of pages of similar-looking object-oriented calls than accept one new syntactic construct. My personal take will be to exploit all features the language has to offer. With a language like Ruby or Scala or Lisp, syntax extensibility is the natural model. While Java offers powerful OO abstractions - look at the natural difference of paradigms in modeling a Ruby on Rails application and a Spring-Hibernate application. This is one of the great eye-openers that the new dynamic languages have brought to the forefront of OO programmers - beautiful abstractions are no longer a monopoly of OO languages. Lisp tried to force this realization long back, but possibly the world was not ready for it. 7 comments: Pretty nice article; thanks. Syntax abstractions are far and away more powerful than any OOP abstractions because the help create an environment that looks like the problem you're trying to solve. No matter what you do with Java you are still forced the write abominations like the Predicate example, which will always detract from code clarity. Hopefully the syntax for Java 7 will not make the cure worse than the disease, but we shall see. 
I hope that JRuby moves into the mainstream and we can get (nearly) the best of both worlds (I still *really* like macros :) I don't think we should expect too much on syntax front from Java 7. They are trying to sneak in closures, but I doubt if they will make it usable enough in a pleasing way, since implementing it effectively will imply loss of backwards compatibility, which is the last thing the Java guys want. Even talking about Java 5 features, the smart loop isn't that smart, as Steve Yeggey has pointed out in one of its blogs. I think it is true that Java needs an overhaul to get rid of the legacy syntax. I had blogged about it some time ago. All said and done, I still do Java for a living, since it has no substitute in the enterprise scalability. And I agree that JRuby moving into the mainstream can be the best that can happen for the JVM. Using Ruby elegance with Java collections and tonnes of libraries .. I think this is the combination to look for .. For me, Scala is the best compromise between expressiveness, performance, and real-world pragmatic potential. If performance doesn't matter, then Python/Ruby. I am also a big admirer of Scala. I have blogged extensively on some of the various features of Scala which I liked. See here, here, here and here. Thanks for the interesting post. I would note that when you switch from the Lisp "dolist" macro over to the Java Predicate example, you say "[Java counterpart of] an object oriented abstraction of the same functionality". Then you move onto metaprogramming. The Java solution isn't inherently OO and you don't have to switch to code generation. Another pure OO solution would be to make the object responsible for understanding a dolist message. This is done at design time in the IDE or running image. True, Java cannot support this level of flexibility, but it isn't terribly OO, just C++ done right :) Ruby does have better support for late-binding and dynamic class definition. 
It can also provide a new OO iteration construct without switching to meta-programming. Hi Debasish, what do u think?
http://debasishg.blogspot.com/2007/01/syntax-extensibility-ruby.html
Greetings, If the documentation I seek does not exist and my questions get answered, I'd be happy to write the manual. I have the following additional questions. 1. What is the relationship between abcl and Java threads? Can multiple abcl lisp function be run at the same time? Can other Java threads be run while abcl is running? 2. What is the relationship between Java exceptions and abcl? What happens if a Java exception gets raised when abcl calls a Java method? Does abcl throw Java exceptions? Is there any special cleanup required when this happens? 3. Can you create Java classes from abcl? Thanks. Blake McBride Greetings, Is there any abcl specific documentation? Specifically, what I am looking for is documentation for: 1. How to call abcl from Java 2. How to call Java from abcl 3. How to pass and return lisp and Java values and how they are represented on each side 4. What are the .cls files? Thanks. Blake McBride > Hi, I'm using abcl library for a university project. > I think it's very valid and powerful, > but I can not redirect the output of interpreter.eval () > in a string. How can I do? i'm not sure what do you mean by "output of eval". if you mean a return value, that is a LispObject, you can convert it to string like this: LispObject result = Lisp.eval(obj, new Environment(), thread); out.println(result.writeToString()); if want to capture what is printed to output stream during eval, easiest way would be to use with-output-to-string on Lisp side, and capturing its output. another solution is to supply your own stream as a default one -- either pass it to Interpreter's constructor, or use interpreter.resetIO() Hi, I'm using abcl library for a university project. I think it's very valid and powerful, but I can not redirect the output of interpreter.eval () in a string. How can I do? 
Thank you for your kind attention The exception to the GPL that the license includes requires that the modules linked with it be "independent" and not derived or based on it. That seems to mean that since my program would have to link to it from the outside requiring it as a library rather than independent common lisp code not dependent or specifically linking to it that my scheme code would be "derived" from it and would have be GPL to not violate it. This makes abcl unusable for any of my needs. if Scheme didn't work out for the main code, I would still have to avoid it so it wouldn't be stuck in walled garden or GPL jail. Since the jvm/class path are going with the same exception that wierds me out more, but I don't think I'll need to drop the jvm too. It seems that Sun's & FSF think that that exception works as intended to allow linking through from both sides but the wording doesn't seem so at all. That is where my initial confusion came from. It would be nice if the of abcl license was changed so I could access common lisp libraries from outside scheme libraries/programs that have to be linkable from both GPL and proprietary software Since there is no other jvm common lisp, I'm going to have to rely on jacl (tcl) and java (ugh) for more of my library needs. Scheme just doesn't have enough to complete a online game on its own. At least with Scheme, I can try jscheme when slib fails to work with kawa scheme. - Jason 2008/6/1 Peter Graves > > On Fri, 30 May 2008 at 22:36:33 +0200, Erik Huelsmann wrote: > In a piece of Java code, I'd like to do something like the lisp code below: > > (let ((*handler-clusters* (cons (list 'error #'foo) *handler-clusters*))) > ... <my code>) > > When looking at Symbol.java:getDescription(), I see a construct like this: > > final LispThread thread = LispThread.currentThread(); > SpecialBinding lastSpecialBinding = thread.lastSpecialBinding; > thread.bindSpecial(Symbol.PRINT_ESCAPE, NIL); > try > { > .... 
> } > finally > { > thread.lastSpecialBinding = lastSpecialBinding; > } > > To me, that looks like something I should be able to do in my code > too. However, while LispThread.lastSpecialBinding is public, its type > SpecialBinding is not... > > I presume this is a bug? > > If it is, the patch below fixes this issue: > > Index: SpecialBinding.java > =================================================================== > RCS file: > /cvsroot/armedbear-j/j/j/src/org/armedbear/lisp/SpecialBinding.java,v > retrieving revision 1.1 diff -u -r1.1 SpecialBinding.java --- > SpecialBinding.java 28 Feb 2005 02:48:42 -0000 1.1 +++ > SpecialBinding.java 30 May 2008 20:33:40 -0000 @@ -22,7 +22,7 @@ > package org.armedbear.lisp; > > // Package accessibility. > -final class SpecialBinding > +final public class SpecialBinding > { > final LispObject name; > LispObject value; > > Committed. Thanks!
https://sourceforge.net/p/armedbear-j/mailman/armedbear-j-devel/?viewmonth=200806
Modern user interfaces provide widgets such as sliders and rotary dials, for example, volume controls. Such user interface controls can be viewed as a special case of selection control where the underlying set of choices has additional structure in that the available values are well ordered. XForms defines a generic range control that can be used to pick a value from a set of well-ordered values. Element range returns a single value from the set of available values. As with other XForms controls, the set of available values is declared in the XForms model. Thus, it is meaningless to bind control range to types whose value space is not well ordered. Element range accepts all the common attributes and child elements described in Section 3.2; in addition, special attributes on element range are used to tune the presentation and interaction behavior of the resulting control—see Figure for an example of a volume control authored using element range. <html xmlns=""> <head><title>Volume Control</title> <model xmlns="" id="sound" schema="units.xsd"> <instance> <settings xmlns=""> <volume xsi:... </settings></instance> </model></head> <body> <group xmlns=""> <range model="sound" ref="/settings/volume" appearance="full" step="5"> <label>Volume</label> <help>...</help><hint>...</hint> </range></group> </body></html> start Optional attribute start specifies the start value to be made available by the control. By default, the start value is the minimum permissible value as defined in the model. end Optional attribute end specifies the maximum value to be made available by this control. By default, the end value is the maximum permissible value as defined in the model. step Attribute step determines the offset used when moving through the set of available values. If specified, it should be appropriate for expressing the difference between two valid values from the underlying set of values. 
As an example, when picking from an ordered set of numbers, such as when setting the volume, specifying step=5 would change the volume in steps of 5. Notice that the volume control shown in Figure uses the minimum and maximum permissible values defined in the model rather than further constraining these via attributes start and end. Attribute step specifies that the volume should be changed in steps of 5. Attribute appearance is set to full to request that the control be presented with the full range of available values; as a result, a visual interface might present this control as a slider that shows both the minimum and maximum acceptable values—see Figure. In contrast, specifying a value of minimal for attribute appearance might result in a presentation that takes up less display real estate. Specialized widgets such as rotary controls or spin dials might be requested by specifying a namespace qualified value such as appearance="my:dial". This is similar to requesting a custom date picker as illustrated in Section 3.3. Notice that this design permits the author to create user interfaces that degrade gracefully, that is, the control can be presented as a spin dial on a device that makes such a widget available; however, the interface is still usable on a device that does not contain a spin dial widget. Alternatively, devices that contain a spin dial might choose to use that representation for presenting all range controls; this enables the XForms author to create user interfaces that eventually get delivered in a manner that is optimal for the target device.
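To make the well-ordered value set concrete, the following sketch enumerates the values a range control with numeric start, end and step attributes would move through. It is an illustration only, not part of the XForms text above; the function name is mine, and the choice to stop at the last value not exceeding end is an assumption.

```python
def range_values(start, end, step):
    # Values offered by the control: start, start+step, ... while <= end.
    values = []
    v = start
    while v <= end:
        values.append(v)
        v += step
    return values

# e.g. a volume constrained to 0..100 with step=5
# would move through 0, 5, 10, ..., 100.
```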
http://codeidol.com/community/dotnet/selecting-from-a-range-of-values/16783/
Anyone help with xlwt?

Posted 20 August 2014 - 07:49 PM

I am trying to use xlwt to write out input from a user to an Excel spreadsheet but I don't understand how I increment the cell values for each item in a list.

import xlwt
from datetime import datetime

wb = xlwt.Workbook()
ws = wb.add_sheet("A Test Sheet")  # creates a new work sheet in the work book. The string is the name
#ws.write(0, 0, "something")
#ws.write(0, 10, "something")

list = ["blaw", "blaw1", "blaw2"]
for i in list:
    someInput = int(raw_input("Input some something here please"))
    right = 0
    down = 0
    ws.write(down, right, someInput)
    right = right + 1
    down = down + 1

wb.save("testworkbook.xls")  # saves the work book. The string is the name

Re: Anyone help with xlwt?

Posted 20 August 2014 - 07:58 PM

lol, I am so dumb I had the loop resetting the values back every time sorry for being a noob please delete this thread for the love of good and my future children.
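For anyone who finds this thread later, here is a sketch of the fix the poster alludes to: the row counter must be initialized once, before the loop, not inside it. The helper below is hypothetical (any object with a write(row, col, value) method works, which is the signature an xlwt sheet exposes):

```python
def fill_column(ws, values, col=0):
    # Initialize the counter once, before the loop.
    # Putting "row = 0" inside the loop resets the position on every
    # iteration, so every value is written to the same cell.
    row = 0
    for value in values:
        ws.write(row, col, value)  # xlwt signature: write(row, col, value)
        row += 1
```

With xlwt that would be used roughly as: ws = wb.add_sheet("A Test Sheet"); fill_column(ws, inputs); wb.save("testworkbook.xls").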
http://www.dreamincode.net/forums/topic/352377-anyone-help-with-xlwt/
Learn to use Visual Studio, Visual Studio Online, Application Insights and Team Foundation Server to decrease rework, increase transparency into your application and increase the rate at which you can ship high quality software throughout the application lifecycle This post (addressing uservoice feedback on CA) was written by Nat Ayewah, a member of the code analysis team in Windows This post (addressing uservoice feedback on CA) was written by Nat Ayewah, a member of the code analysis team in Windows Last year's release of Visual Studio 2012 marked a significant update to the Code Analysis experience in Visual Studio. We made code analysis available in more editions of Visual Studio, introduced a new user interface for viewing, filtering and stepping through results, and made accuracy and other improvements. In Visual Studio 2013, our focus has been on fixing bugs in response to user feedback, and making a few more improvements to the user experience. Highlights include: Visual Studio 2013 introduces categories for native rules and exposes the existing managed code analysis categories in the user interface. These categories provide a more fine-grained grouping of defects to indicate, for example, if the defect is related to an annotation syntax error, a critical security vulnerability or a simple logic error. Categories are particularly helpful when dealing with a large list of warnings, which can be overwhelming without some guidance on which warnings to focus on first. With this change, users can focus their efforts on the categories that are most relevant to their needs. Users will immediately notice the new categories because they augment the results displayed in the code analysis viewer: Users also have the option to filter the results by category using the search box, or select a specific category from a new dropdown button. By design, this button replaces the Error/Warning option that was in Visual Studio 2012. 
Users can still use the search box to separate errors from warnings. Visual Studio 2012 moved code analysis results out of the error list and into a new Code Analysis Viewer that makes it easier to read and filter results. It also provides a detailed explanation of the code path for some warnings. One key feature of the error list that was missing in the new viewer was the ability to sort the defect list. Visual Studio 2013 adds support for sorting to the new viewer by way of a new toolbar Sort button. Users can sort the defect list by six common properties or reset the list to its default sort order. Selecting a sort property twice results in a descending order sort: The code analysis team received lots of useful feedback from users that was used to improve the accuracy of the analysis for native code analysis. We also worked with partners to improve the quality of headers shipped with Windows and Drivers Kits. Please try out Visual Studio 2013 and check out the new Code Analysis features. We would love to hear any questions or comments you have in the comments below or on our MSDN forum. Great that the headers shipping with Windows SDK/DDK are improved. Just need the MFC/ATL headers to be correctly annotated now.... Are there any changes to the annotations available, or are the VS2012 list of annotations still the current list? Any plans to update standalone FxCop seeing that Code Analysis is still not supported in the Professional Edition? A huge deal would be if you would add a filtering option that would allow the developer to filter out CA messages only for the files that are currently checked-out. The reason is that when the existing codebase contains many warnings it is very easy to introduce new ones just because you don't find your changes in the list. @Knaģis, why not run code analysis on the whole project. 
Then suppress them all in the Global Suppression file, as and when you change code you will invalidate the global suppression and be force to fix them. @Josh We’ve improved annotations in MFC/ATL headers, but there’s no change in terms of annotations available in VS 2013 compared with VS 2012. @Carel Full code analysis functionality was moved to the Professional addition of Visual Studio in 2012, with limited functionality included in the Express editions What are the plans for FxCop? It seems like it's being retired. Is there any plan to add the new Code Analysis to SSDT? Nope, at this time there are no plans to add the new Code Analysis to SSDT @Vince, Are you referring to the FxCop standalone, or FxCop in Visual Studio? We removed the standalone FxCop from the Windows SDK for several reasons, including 1. There was a desire to reduce the size of the Windows SDK, and distinguish between components that were truly Windows versus Visual Studio components. 2. As I mentioned below full FxCop support was moved into the Professional Edition of Visual Studio 2012 with limited support included in the Express versions. FxCop continues to be fully supported in the Visual Studio IDE. This is a great step forward - I really missed being able to sort the CA warnings. One problem that I'm seeing with the new 2012/2013 window on my end is that it isn't always obvious to developers that there are CA warnings after a build. Devs are used to checking the error list after building, and ignoring everything else. This means that code often gets checked in with CA warning violations, which we want to avoid. It would be cool to have an option (something property that could be set in the csproj file) that would cause the existence of any CA warning violations to output a single warning in the Error List. Something like "Warning: there were one or more Code Analysis issues; see the {link}Code Analysis{/link} window for details. @RobSiklos Thanks for the great feedback. 
This experience is something that we are looking to improve in the future; while unfortunately it won't make it into Visual Studio 2013 RTM, it is on our radar. As a workaround at the moment, you could consider creating a custom ruleset where you set the action for code analysis warnings to Error, which will fail your build if there are any.

Are there any improvements to exporting the CA issues to Excel or Word? This was way better in VS2010 when coming from the Errors window. In VS2012, copy/paste from the Code Analysis window doesn't cut it ;) Pasting in Excel does not result in separate columns for the issue number, title, file, line #, etc.

@John, The behavior you want is still available: in Excel, right-click and choose "Paste Special...", then choose "csv". When we introduced the new code analysis window, we added HTML formatting for pasting results into email per request, so when you copy from the window it puts the data into the clipboard as HTML, CSV, and plain text. Excel takes the HTML by default over the CSV (I don't know the rationale for this), but you can tell it to take the CSV and it will paste each of these fields into a separate column.

Andrew, Should this raise a "using uninitialized memory" warning?

#include <sal.h>

void SetToZero(_Out_ int& out)
{
    int* pi = &out;
    *pi = 0;
}

It complains when taking the address of "out". Other operations are fine. Is there a different annotation that I should be using? This seems to be new to VS2013, but I don't know if that's because of a bug, or because of better checking -- I haven't put in enough time to actually learn SAL properly :-/
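The custom-ruleset workaround mentioned above could look roughly like the following sketch. Treat the AnalyzerId/RuleNamespace values and the two rule IDs as illustrative assumptions (C6001 and C6011 are real native analysis warning numbers, but you would adjust the list to the rules you care about):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch of a custom ruleset that promotes selected native code
     analysis warnings to build-breaking errors. -->
<RuleSet Name="CA warnings as errors"
         Description="Fail the build on selected code analysis warnings"
         ToolsVersion="12.0">
  <Rules AnalyzerId="Microsoft.Analyzers.NativeCodeAnalysis"
         RuleNamespace="Microsoft.Rules.Native">
    <Rule Id="C6001" Action="Error" /> <!-- using uninitialized memory -->
    <Rule Id="C6011" Action="Error" /> <!-- dereferencing a null pointer -->
  </Rules>
</RuleSet>
```

You would then point the project's Code Analysis settings at this .ruleset file so that any hit on these rules fails the build.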
http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
When building web applications and sites using React, you have to think carefully about the user interface. You might either go with an established user interface library, develop your own, or try a bit of both. One of the aspects that often is forgotten in UI design is accessibility - how can you make sure as many people as possible are able to use your creation? That is where using a user interface library that's accessible out of the box can come in handy. To learn more about such an option, I am interviewing Diego Haz, the creator of Reakit. My name is Diego Haz. I'm 29, and I've been programming for about 17 years. I started building Open Source Software (OSS) four years ago. I often say I don't like to code. I want to build things for humans and to impact their lives positively. Code is just the way I found to do that. I could be a dancer, but I'm terrible at dancing, so it's code. 😄 OSS is fantastic for achieving this. I can build one solution so many humans (developers) can use it to create many other solutions for even more humans. Besides that, I'm married to Grace Kelly with a five years old stepson. I'm autistic (Asperger), I love astronomy, and I hope someday I'll help solve hunger in the world by automating all the processes from the production of the food to its distribution. Automation is the key. Reakit is a low-level component library for building accessible high-level UI libraries, design systems, and applications with React. It provides components like Dialog, Tab, Tooltip, Form, among others that follow all the WAI-ARIA recommendations. 
You could design a dialog using Reakit as below:

import {
  useDialogState,
  Dialog,
  DialogDisclosure,
} from "reakit/Dialog";

function MyDialog() {
  // dialog exposes `visible` state and
  // methods like `show`, `hide` and `toggle`
  const dialog = useDialogState();
  return (
    <>
      <DialogDisclosure {...dialog}>
        Open dialog
      </DialogDisclosure>
      <Dialog {...dialog}>
        Welcome to Reakit
      </Dialog>
    </>
  );
}

If accessibility matters to you (and there's only one correct answer to this), you should use Reakit components as your foundation. You can play with the example on CodeSandbox.

You can install Reakit through npm:

npm install reakit

And then, use it like this:

import React from "react";
import ReactDOM from "react-dom";
import { Button } from "reakit/Button";

function App() {
  return <Button>Button</Button>;
}

ReactDOM.render(<App />, document.getElementById("root"));

Components can be imported directly from reakit or using separate paths like reakit/Button. The latter is preferred if your bundler doesn't support tree shaking.

import { Button } from "reakit";
import { Button as Button2 } from "reakit/Button";

if (Button === Button2) {
  console.log("They point to the same file");
}

If you use Babel, you can rewrite the imports using babel-plugin-transform-imports. This way you can use import { Button } from "reakit"; while gaining tree shaking. The idea works with other packages too.

The highest level API (which is still low level for most use cases) of Reakit exports React components. They receive two kinds of props: options and HTML props. Options are just custom props that don't get rendered into the DOM. They affect internal component behavior; HTML props translate to actual HTML attributes:

import { Hidden } from "reakit/Hidden";

// `visible` is an option
// `className` is an HTML prop
<Hidden visible className="example" />;

Besides that, all components can be augmented with the as prop and render props.
<Hidden as="button" />
<Hidden>{hiddenProps => <button {...hiddenProps} />}</Hidden>

Reakit provides state hooks out of the box and you can also plug in your own. They receive some options as the initial state and return options needed by their respective components:

import { useHiddenState, Hidden } from "reakit/Hidden";

function Example() {
  // exposes `visible` state and
  // methods like `show`, `hide` and `toggle`
  const hidden = useHiddenState({ visible: true });
  return (
    <>
      <button onClick={hidden.toggle}>Disclosure</button>
      <Hidden {...hidden}>Hidden</Hidden>
    </>
  );
}

As the lowest level API, Reakit exposes props hooks. These hooks hold most of the logic behind components and are used heavily within Reakit's source code as a means to compose behaviors without the hassle of polluting the tree with multiple components. For example, Dialog uses Hidden, which in turn uses Box:

import {
  useHiddenState,
  useHidden,
  useHiddenDisclosure,
} from "reakit/Hidden";

function Example() {
  const state = useHiddenState({ visible: true });
  const props = useHidden(state);
  const disclosureProps = useHiddenDisclosure(state);
  return (
    <>
      <button {...disclosureProps}>Disclosure</button>
      <div {...props}>Hidden</div>
    </>
  );
}

Reakit doesn't depend on any CSS library and components are without styling by default. You're free to use whatever approach you want. Each component returns a single HTML element that accepts all HTML props, including className and style. Learn more about styling.

The main difference is that it's entirely focused on accessibility. It's also low level enough that other solutions (like Material UI, Ant Design, Semantic UI React, etc.) could use it underneath. A similar library that also focuses on accessibility is Reach UI by Ryan Florence. It is a fantastic library, but the design choices make it harder to compose and customize. A good example of this is the use of implicit React Context.
I prefer to give specific pieces so users can build new things without being tied to my design choices. They have more control over what they're making. You can always go from explicit to implicit (for example, you can build a React Context component API using the Reakit API). But the other way around is hard. Here's an example of a high level Tooltip API built upon the low level Reakit API:

import React from "react";
import {
  Tooltip as BaseTooltip,
  TooltipReference,
  useTooltipState,
} from "reakit";

function App() {
  return (
    <Tooltip title="Tooltip">
      <button>Reference</button>
    </Tooltip>
  );
}

function Tooltip({ children, title, ...props }) {
  const tooltip = useTooltipState();
  return (
    <>
      <TooltipReference {...tooltip}>
        {(referenceProps) =>
          React.cloneElement(
            React.Children.only(children),
            referenceProps
          )
        }
      </TooltipReference>
      <BaseTooltip {...tooltip} {...props}>
        {title}
      </BaseTooltip>
    </>
  );
}

If you want something composable and low level, you should choose Reakit. If you're looking for something already abstracted, with less boilerplate, easier to use, and with restrictions that make it harder to make mistakes, I recommend Reach UI.

I started building Reakit in 2017 to ease my team's job, as we were creating most of our components from scratch and they weren't accessible at all. As an autistic person, I don't have any disability that makes the web inaccessible to me. But I do have disabilities that cause a part of the world to be unavailable to me. I know how it feels not to be able to do what most people do. And this motivates me even more to work on Reakit.
In 20 to 30 years, I believe that websites — and software in general — will not be made by humans anymore. Companies will upload their business requirements and their audience data into an AI, which, after testing infinite versions of the software with unlimited versions of simulated people, will respond with the best ready-to-use application based on the available data. Code and design will be fully automated. After all, there's no subjectiveness on this: the version which better performs is usually the best version. It's hard to see now, but Reakit and all the products I'm planning to build around it are my first step into this direction. Learn to learn. Web development and front-end development specifically are evolving fast, and knowing how to learn things is the best ability one can have. Get used to watching videos in 2x speed (or quicker), learn how to search effectively, etc. Don't forget to support us on Open Collective. ❤️ Thanks for the interview, Diego! I think Reakit hits a good balance between providing functionality while letting developers to customize it to their own use cases. The greatest benefit of the approach is that it allows people to bootstrap their own UI libraries without having to develop everything from scratch while gaining functionality and avoiding some of the maintenance cost. To learn more about the project, take a look at Reakit website and star Reakit on GitHub.
https://survivejs.com/blog/reakit-interview/
hpbios.rar > ALL_SSR.INC

; []===========================================================[]
;
; NOTICE: THIS PROGRAM BELONGS TO AWARD SOFTWARE INTERNATIONAL(R)
;         INC. IT IS CONSIDERED A TRADE SECRET AND IS NOT TO BE
;         DIVULGED OR USED BY PARTIES WHO HAVE NOT RECEIVED
;         WRITTEN AUTHORIZATION FROM THE OWNER.
;
; []===========================================================[]
;
;----------------------------------------------------------------------------
;Rev   Date      Name  Description
;----------------------------------------------------------------------------
;R09   04/27/99  PAL   Change sensor include method
;R08   04/26/99  PAL   Added LM84_Support
;R07   04/19/99  JSN   Add AliM5879 Hardware Monitor Support.
;R06   03/11/99  JMS   Added "USE_DUAL_SENSOR" for support Dual sensor
;R05   03/02/99  PAL   Added LM81_Support
;R04A  12/23/98  GAR   Superio_Support_Sensor must be defined if Sensor of
;                      ITE8693 is used
;R04   12/01/98  GAR   Rename "Superio_Support_Thermal" to
;                      "Superio_No_Support_Thermal"
;R03   11/05/98  RIC   Add VIA686 Hardware Monitor Support.
;R02   10/27/98  GAR   Add ITE8693
;R01   10/13/98  TNY   Add "Xeon_Thermal" slot2 CPU support.
;R00   09/03/98  RAY   Initial Revison

;R02 - start
;R04 ifdef Superio_Support_Thermal
;R04A ifndef Superio_No_Support_Thermal ;R04
;R04A ifdef ITE8693
;R04A include ITE8693.SSR
;R04A endif ;ITE8693
;R04A endif ;Superio_No_Support_Thermal ;R04
;R04 endif ;Superio_Support_Thermal
;R02 - end

;R09 - start
include misc.equ
IF SENSOR_KERNEL
include Sensor.SSR
include SECOND.SSR
ENDIF; SENSOR_KERNEL
;R09 - end

;R09;R04A - start
;R09 ifdef Superio_Support_Sensor
;R09 include Sensor.SSR
;R09 endif ;Superio_Support_Sensor
;R09;R04A - end
;R09
;R09 ifdef LM75_Support
;R09 include LM75.SSR
;R09 endif ;LM75_Support
;R09
;R09 ifdef LM78_Support
;R09 include LM78.SSR
;R09 endif ;LM78_Support
;R09
;R09 ifdef LM80_Support
;R09 include LM80.SSR
;R09 endif ;LM80_Support
;R09
;R09 ifdef GL520SM_Support
;R09 include GL520SM.SSR
;R09 endif ;GL520SM_Support
;R09
;R09 ifdef GL518SM_Support
;R09 include GL518SM.SSR
;R09 endif ;GL518SM_Support
;R09
;R09 ifdef W83781D_Support
;R09 include W83781D.SSR
;R09 endif ;W83781D_Support
;R09
;R09 ifdef HT82791_Support
;R09 include HT82791.SSR
;R09 endif ;HT82791_Support
;R09
;R09 ifdef SIS5595_Support
;R09 include SIS5595.SSR
;R09 endif ;SIS5595_Support
;R09
;R09 ifdef MAX1617_Support
;R09 include MAX1617.SSR
;R09 endif ;MAX1617_Support
;R09
;R09 ifdef ADM9240_Support
;R09 include ADM9240.SSR
;R09 endif ;ADM9240_Support
;R09
;R09 ifdef W83782D_Support
;R09 include W83782D.SSR
;R09 endif ;W83782D_Support
;R09
;R09 ifdef W83783S_Support
;R09 include W83783S.SSR
;R09 endif ;W83783S_Support
;R09
;R09 ifdef AliM5879_Support ;R07
;R09 include AliM5879.SSR ;R07
;R09 endif ;AliM5879_Support ;R07
;R09
;R09 ifdef Xeon_Thermal ;R01
;R09 include XEON.SSR ;R01
;R09 endif ;Xeon_Thermal ;R01
;R09
;R09 ifdef VIA686HM_Support ;R03
;R09 include VIA686HM.SSR ;R03
;R09 endif ;VIA686HM_Support ;R03
;R09
;R09 ifdef LM81_Support ;R05
;R09 include LM81.SSR ;R05
;R09 endif ;LM81_Support ;R05
;R09
;R09 ifdef LM84_Support ;R08
;R09 include LM84.SSR ;R08
;R09 endif ;LM84_Support ;R08
;R09
;R09 ifdef USE_DUAL_SENSOR ;R06
;R09 include SECOND.SSR ;R06
;R09 endif ;USE_DUAL_SENSOR ;R06
http://read.pudn.com/downloads22/sourcecode/asm/71739/hpbios/ALL_SSR.INC__.htm
Is there a library for a C compiler which will add C++'s features?

Not really, since C++ is a different language. Some compilers (and the C99 standard) have cherry-picked a few things that they find handy out of the C++ language (variable sized arrays and "variable declarations anywhere" being the most obvious ones).

On the one hand, you could say that all C++ features can actually be achieved in C. On the other hand, C++ _IS_ a different language, and to add those features to C would require changing the compiler - you can't just use a library as a replacement. But bear in mind that the first C++ compiler actually produced C code, and used a standard C compiler to generate machine code.

-- Mats

Is there a reason you can't use C++ instead? I don't know of such a library, but if you need to implement an object-orientated design in C, then there are different strategies for this. Some links that might be useful: ldeniau.web.cern.ch/ldeniau/html/oopc/oopc.html

>>Not really, since C++ is a different language. Some compilers (and the C99 standard) have cherry-picked a few things that they find handy out of the C++ language (variable sized arrays and "variable declarations anywhere" being the most obvious ones).

When did C++ get variable sized arrays? You can do that in C99, so it's not really a C++ feature :) But since when did C++ allow arrays with a non-compile-time-constant size? Your example doesn't compile, mats!

Ooops my bad ;). Sorry matsp.

Sorry, C++ doesn't technically support that, but gcc does as an extension - I got myself confused [again]. I'm not sure if C99 does support it or not - gcc -std=c99 -pedantic does compile this:
#include <stdio.h>

double func(const int n)
{
    int arr[n];
    double d = 0;
    for (int i = 0; i < n; i++) {
        arr[i] = 1 << i;
    }
    for (int j = 0; j < n; j++) {
        d += 1.0 / arr[j];
    }
    return d;
}

int main()
{
    int n;
    printf("Enter number of elements:");
    scanf("%d", &n);
    printf("Result=%f\n", func(n));
    return 0;
}

whilst g++ -std=c++98 -pedantic does not. Remove pedantic and it does.

-- Mats

VLAs were a GCC extension and were adopted in C99. They're not part of any C++ standard, and won't be part of C++0x, AFAIK. GCC's documentation says, however, that their implementation of VLAs is not 100% compliant with the C99 definition.

A good library for getting object-oriented functionality into C is glib/gobject.

A library I am using compiles only with a C compiler, but not with a C++ compiler. My platform and compiler are outdated. Also, no C++ compiler is available with full (compared to recent compilers) standard support. C++ as an "addon" for a standard C compiler would have been a nice thing, but if it doesn't exist I am out of luck.

That would be quite impossible. Different syntax means the compiler itself must change.

I'm not sure if I understand your problem well, but you can mix C and C++ code. In short, you compile your C code with the C compiler and your C++ code with the C++ compiler. By supplying linkage specifications you're able to use C code within C++ code.
http://cboard.cprogramming.com/c-programming/106809-adding-cplusplus-support-c-thought-library-printable-thread.html
in reply to Re^3: shebang anomaly
in thread shebang anomaly

I contacted the maintainers of GNU coreutils, of which env is a component, and the answer I got really pissed me off, so I am forced to write my own env for use in Debian (env behaves as expected in macOS).

Printing the environment is the least difficult of what env does. That can be done with six lines of C, as in:

#include <stdio.h>

int main(int argc, char** argv, char** envp)
{
    while (*envp++) {
        printf("%s\n", envp[-1]);
    }
}

The more challenging part, or parts, is the other things that env does, and making it work the same whether it is called from the command line or as a shebang. I subscribe to the principle of "least surprise", as in this article:

I'll start with the code in cmd, as reported by choroba in an earlier post, and go from there. It is fairly obvious to me that a well-behaved env could be implemented in any of C, bash or Perl. I might try them all, just to explore the possibilities.

The GNU coreutils team should learn some humility, since a utility program's purpose is to enable users to do what they want to do, not assert some arrogant notion of "not-invented-here." I maintain that GNU env is either stupid or broken because it does not behave in a shebang the way it does when called from a command line. These two methods of invocation should not produce different results, no matter how aggressively someone asserts the program behaves correctly.

Why do you think this works in macOS and not Linux?

Because the shell used to parse and execute the shebang line is different in each of those.

Atta(perl)boy!!! But beware the can of worms Anonymous Monk already warned you about, re: the breaking of the command line, shell escapes and nested quotes.
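The "could be implemented in any of C, bash or Perl" claim above can be sketched in a few lines of shell. This is a rough illustration of env(1)'s core behavior only; unlike the real env, the function runs inside the current shell rather than exec-ing, so the exported variables leak into the caller:

```shell
# my_env: sketch of env(1)'s core behavior - export any leading
# NAME=VALUE arguments, then run the remaining words as a command.
my_env() {
    while [ $# -gt 0 ]; do
        case $1 in
            *=*) export "$1"; shift ;;   # NAME=VALUE: put it in the environment
            *)   break ;;                # first non-assignment word starts the command
        esac
    done
    if [ $# -eq 0 ]; then
        env                              # no command given: print the environment
    else
        "$@"                             # run the command
    fi
}

my_env GREETING=hello sh -c 'echo "$GREETING, world"'   # prints: hello, world
```

A faithful implementation would also handle options such as -i (start with an empty environment) and would exec the command so that no state leaks back into the caller.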
https://www.perlmonks.org/index.pl/?node_id=1213258
Let's walk through a typical session. We'll set up the environment, read what keys are present on the device, and print some summary information about them. First of all, we'll include the header we created in section 3.3:

#include "pkcs11_linux.h"

Most PKCS #11 functions return a status code of the type CK_RV. Because we do not want to make an example that doesn't use error checking at all, but we do not want to clutter up the code with graceful error handling, we'll just provide a function that checks the return value and exits with an error message if there was a problem. The standard specifies exactly what error codes all functions can return, so graceful error handling should not be too much of a problem. A good idea is to at least map the error code back to its name.

void check_return_value(CK_RV rv, const char *message)
{
    if (rv != CKR_OK) {
        fprintf(stderr, "Error at %s: %u\n",
                message, (unsigned int)rv);
        exit(EXIT_FAILURE);
    }
}

So, now that we have that out of the way, let's initialize the environment.

CK_RV initialize()
{
    return C_Initialize(NULL);
}

The initialization function has one argument. In it, you can specify some global options. These all have to do with threading. See section 6.6 and section 11.4 of the specification for more information. Since we are not making a threaded application here, we can safely pass NULL. With the library initialized, it's time to select a slot to use. PKCS providers usually provide multiple slots that can take tokens. In this case, we just want the first one.
CK_SLOT_ID get_slot()
{
    CK_RV rv;
    CK_SLOT_ID slotId;
    CK_ULONG slotCount = 10;
    CK_SLOT_ID *slotIds = malloc(sizeof(CK_SLOT_ID) * slotCount);

    rv = C_GetSlotList(CK_TRUE, slotIds, &slotCount);
    check_return_value(rv, "get slot list");

    if (slotCount < 1) {
        fprintf(stderr, "Error; could not find any slots\n");
        exit(1);
    }

    slotId = slotIds[0];
    free(slotIds);
    printf("slot count: %d\n", (int)slotCount);
    return slotId;
}

The C_GetSlotList function takes three arguments. The first one is a boolean value that specifies whether to only return slots that actually have a token present at the moment, or to return all slots. The second is an array to place the slot ID values in, and the third argument is a pointer to the number of slots we have allocated memory for. As explained in 3.2, if the second argument had been NULL, the number of slots found would be set in the third argument. This value could be used to allocate exactly the right amount of memory, after which the function would be called again. This mechanism is used often in PKCS #11. In this case we'll do it in one step, and assume our reader does not have more than 10 slots. If it does, this function should return the error code CKR_BUFFER_TOO_SMALL. Some implementations might just return the first 10 slots, although you shouldn't count on that.

So, if everything went right, we have a slot with a token now. For that slot, we'll start a session. This session will be the context for all actions we do later.

CK_SESSION_HANDLE start_session(CK_SLOT_ID slotId)
{
    CK_RV rv;
    CK_SESSION_HANDLE session;
    rv = C_OpenSession(slotId, CKF_SERIAL_SESSION, NULL, NULL, &session);
    check_return_value(rv, "open session");
    return session;
}

The second argument is a flags parameter, in which certain settings can be set. CKF_SERIAL_SESSION must always be set. If we wanted write access to the token (which we don't for just reading keys), we would XOR this with the value CKF_RW_SESSION.
If we would want the library to notify it of certain events in the session, we would provide a callback function as the third argument. See section 11.17 of the specification for more information on this. The final argument is a pointer to the CK_SESSION_HANDLE variable that we'll use to refer to the session we have opened. If the token needs a user to provide a PIN, we can use the function C_login. The pin is an array of bytes. Some implementations accept a NULL pin if it wasn't set, although this does not follow the specification. void login(CK_SESSION_HANDLE session, CK_BYTE *pin) { CK_RV rv; if (pin) { rv = C_Login(session, CKU_USER, pin, strlen((char *)pin)); check_return_value(rv, "log in"); } } That's the initialization. Now we could go and actually do something. But, for completeness' sake, let's first get the cleanup routines out of the way, these are not much more than the reversed function made so far. void logout(CK_SESSION_HANDLE session) { CK_RV rv; rv = C_Logout(session); if (rv != CKR_USER_NOT_LOGGED_IN) { check_return_value(rv, "log out"); } } We do one additional check on the return value; if we did not pass a PIN to the login function, we would not be logged in at all, resulting in the error CKR_USER_NOT_LOGGED_IN. Rather than keeping track of whether we should have been logged in, we'll just watch out for this error. Two things remain: we are still in a session, and we need to clean up the library. void end_session(CK_SESSION_HANDLE session) { CK_RV rv; rv = C_CloseSession(session); check_return_value(rv, "close session"); } void finalize() { C_Finalize(NULL); }
https://www.nlnetlabs.nl/downloads/publications/hsm/hsm_node13.html
CC-MAIN-2018-09
refinedweb
854
63.7
Use this procedure to completely remove and reset the LUN configuration. If you reset a LUN configuration, a new device ID number is assigned to LUN 0. This change occurs because the software assigns a new world wide name (WWN) to the new LUN. From one node that is connected to the storage array or storage system, determine the paths to the LUNs you are resetting. For example: (A1000 Only) Is the LUN a quorum device? This LUN is the LUN that you are resetting. If no, proceed to Step 3. If yes, relocate that quorum device to another suitable storage array. For procedures about how to add and remove quorum devices, see your Sun Cluster system administration documentation. For a list of Sun Cluster documentation, see Related Documentation. Does a volume manager manage the LUNs on the controller module you are resetting? If no, proceed to Step 4.. On one node, reset the LUN configuration. For the procedure about how to reset the LUN configuration, see the Sun StorEdge RAID Manager User’s Guide. (StorEdge A3500 Only) Set the controller module back to active/active mode. For more information about controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide. Use the format command to label the new LUN 0. Remove the paths to the old LUNs you reset. (StorEdge A3500 Only) Determine the alternate paths to the old LUNs you reset. Use the lad command. The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path. Example: Therefore, the alternate paths are as follows: (StorEdge A3500 Only) Remove the alternate paths to the old LUNs you reset. On both nodes, update device namespaces. On both nodes, remove all obsolete device IDs. Move all resource groups and device groups off the node. Shut down the node. 
For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation. For a list of Sun Cluster documentation, see Related Documentation. Perform a reconfiguration boot to create the new Solaris device files and links. If an error message like the following appears, ignore it. Continue with the next step. device id for '/dev/rdsk/c0t5d0' does not match physical disk's id. After the node reboots and joins the cluster, repeat Step 7 through Step 14 on the other node. This node is attached to the storage array or storage system. The device ID number for the original LUN 0 is removed. A new device ID is assigned to LUN 0.
http://docs.oracle.com/cd/E19199-01/817-5682/resetlun/index.html
CC-MAIN-2016-07
refinedweb
452
67.76
The while loop is the most fundamental loop statement of Java. It repeats a statement or block while its controlling expression is true. Here is the general form of while loop in Java: while(condition) { // body of the loop } The condition can be any Boolean expression. The body of the loop will be executed as long as the conditional expression is true. When the condition becomes false, the control passes to the next line of code immediately following the loop. The curly braces are unnecessary if only a single statement is being repeated. Following is a while loop that counts down from 10, printing exactly 10 lines of "tick" : /* Java Program Example - Java while Loop * Demonstrate the while Loop */ public class JavaProgram { public static void main(String args[]) { int n = 10; while(n>0) { System.out.println("tick " + n); n--; } } } When the above Java program is compile and executed, it will produce the following output: Since the while loop evaluates its conditional expression at the top of the loop, the body of the loop will not execute even once if the condition is false to begin with. For example, in the following code fragment, the call to println() is never executed: int a = 10, b = 20; while(a > b) { System.out.println("This will not be displayed."); } The body of the while loop can be empty. This is because a null statement (once that consists only of a semicolon) is syntactically valid in Java. For example, consider the following program: /* Java Program Example - Java while Loop * The target of a loop can be empty */ public class JavaProgram { public static void main(String args[]) { int i, j; i = 100; j = 200; // find midpoint between i and j while(++i < --j); // there is no body in this loop System.out.println("Midpoint is " + i); } } When the above Java program is compile and executed, it will produce the following output:. Here are the list of some more examples, that you may like: Java Programming Online Test Tools Calculator Quick Links
https://codescracker.com/java/java-while-loop.htm
CC-MAIN-2017-22
refinedweb
337
58.21
Dealing with Request Data¶ The most important rule about web development is “Do not trust the user”. This is especially true for incoming request data on the input stream. With WSGI this is actually a bit harder than you would expect. Because of that Werkzeug wraps the request stream for you to save you from the most prominent problems with it. Missing EOF Marker on Input Stream¶ The input stream has no end-of-file marker. If you would call the read() method on the wsgi.input stream you would cause your application to hang on conforming servers. This is actually intentional however painful. Werkzeug solves that problem by wrapping the input stream in a special LimitedStream. The input stream is exposed on the request objects as stream. This one is either an empty stream (if the form data was parsed) or a limited stream with the contents of the input stream. When does Werkzeug Parse?¶ Werkzeug parses the incoming data under the following situations: - you access either form, files, or streamand the request method was POST or PUT. - if you call parse_form_data(). These calls are not interchangeable. If you invoke parse_form_data() you must not use the request object or at least not the attributes that trigger the parsing process. This is also true if you read from the wsgi.input stream before the parsing. General rule: Leave the WSGI input stream alone. Especially in WSGI middlewares. Use either the parsing functions or the request object. Do not mix multiple WSGI utility libraries for form data parsing or anything else that works on the input stream. How does it Parse?¶ The standard Werkzeug parsing behavior handles three cases: - input content type was multipart/form-data. In this situation the streamwill be empty and formwill contain the regular POST / PUT data, fileswill contain the uploaded files as FileStorageobjects. - input content type was application/x-www-form-urlencoded. 
Then the streamwill be empty and formwill contain the regular POST / PUT data and fileswill be empty. - the input content type was neither of them, streampoints to a LimitedStreamwith the input data for further processing. Special note on the get_data method: Calling this loads the full request data into memory. This is only safe to do if the max_content_length is set. Also you can either read the stream or call get_data(). Limiting Request Data¶ To avoid being the victim of a DDOS attack you can set the maximum accepted content length and request field sizes. The BaseRequest class has two attributes for that: max_content_length and max_form_memory_size. The first one can be used to limit the total content length. For example by setting it to 1024 * 1024 * 16 the request won’t accept more than 16MB of transmitted data. Because certain data can’t be moved to the hard disk (regular post data) whereas temporary files can, there is a second limit you can set. The max_form_memory_size limits the size of POST transmitted form data. By setting it to 1024 * 1024 * 2 you can make sure that all in memory-stored fields are not more than 2MB in size. This however does not affect in-memory stored files if the stream_factory used returns a in-memory file. How to extend Parsing?¶ Modern web applications transmit a lot more than multipart form data or url encoded data. Extending the parsing capabilities by subclassing the BaseRequest is simple. The following example implements parsing for incoming JSON data: from werkzeug.utils import cached_property from werkzeug.wrappers import Request from simplejson import loads class JSONRequest(Request): # accept up to 4MB of transmitted data. max_content_length = 1024 * 1024 * 4 @cached_property def json(self): if self.headers.get('content-type') == 'application/json': return loads(self.data)
When creating a template, you frequently need to include a child resource that is related to a parent resource. For example, your template may include a SQL server and a database. The SQL server is the parent resource, and the database is the child resource.

The format of the child resource type is:

    {resource-provider-namespace}/{parent-resource-type}/{child-resource-type}

The format of the child resource name is:

    {parent-resource-name}/{child-resource-name}

However, you specify the type and name in a template differently based on whether it is nested within the parent resource, or on its own at the top level. This topic shows how to handle both approaches.

Nested child resource

The easiest way to define a child resource is to nest it within the parent resource. The following example shows a SQL database nested within a SQL server.

    {
      "name": "exampleserver",
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      ...
      "resources": [
        {
          "name": "exampledatabase",
          "type": "databases",
          "apiVersion": "2014-04-01",
          ...
        }
      ]
    }

For the child resource, the type is set to databases but its full resource type is Microsoft.Sql/servers/databases. You do not provide Microsoft.Sql/servers/ because it is assumed from the parent resource type. The child resource name is set to exampledatabase but the full name includes the parent name. You do not provide exampleserver because it is assumed from the parent resource.

Top-level child resource

You can define the child resource at the top level. You might use this approach if the parent resource is not deployed in the same template. When defined at the top level, the child resource uses the full resource type and a name that includes the parent name:

    {
      "name": "exampleserver/exampledatabase",
      "type": "Microsoft.Sql/servers/databases",
      "apiVersion": "2014-04-01",
      ...
    }

The database is a child resource to the server even though they are defined on the same level in the template.

Next steps

- For recommendations about how to create templates, see Best practices for creating Azure Resource Manager templates.
- For an example of creating multiple child resources, see Deploy multiple instances of resources in Azure Resource Manager templates.
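The two composition rules above can be sketched in a few lines of Python. These helper functions are hypothetical illustrations of the naming scheme, not part of any Azure SDK:

```python
def full_child_type(provider_namespace, parent_type, child_type):
    """Compose the full resource type used for a top-level child
    resource, e.g. Microsoft.Sql + servers + databases."""
    return "/".join([provider_namespace, parent_type, child_type])

def full_child_name(parent_name, child_name):
    """Compose the full resource name used for a top-level child
    resource, e.g. exampleserver + exampledatabase."""
    return "/".join([parent_name, child_name])

print(full_child_type("Microsoft.Sql", "servers", "databases"))
# Microsoft.Sql/servers/databases
print(full_child_name("exampleserver", "exampledatabase"))
# exampleserver/exampledatabase
```

When the child is nested inside the parent, only the last segment of each value (databases, exampledatabase) appears in the template, because the leading segments are assumed from the parent.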
I have a class from the help of "setrofim".

Code:

    class Character(object):
        def __init__(self, race, health, strength, defence, magic, agility, ranged):
            self.race = race
            self.health = health
            self.strength = strength
            self.defence = defence
            self.magic = magic
            self.agility = agility
            self.ranged = ranged

    characters = [
        Character("Org", 100, 10, 10, 5, 8, 5),
        Character("Elf", 90, 8, 8, 10, 10, 12),
        Character("Templar", 120, 10, 12, 10, 5, 5),
    ]

But I have no idea how to use it the way I want. Example: you get to make your own char with his own stats from another class with the same format as above, called "orace".

Now I want to be able to match the character the player chose against one of the NPCs (the characters from the above class) so there is a winner, but I don't know how to match them against each other. I also don't know how to keep a constant print on the screen that shows the stats (like the health) and updates itself as the player progresses.

Any ideas on how I would go about this? Sorry if I'm vague, I tried my best to explain it. Thanks for reading and helping!
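One common way to match two Character objects against each other is a simple turn-based loop. Here is a minimal sketch; the damage formula, the initiative rule, and the per-turn print-out are placeholder assumptions chosen for illustration, not the only way to do it:

```python
import copy

class Character(object):
    def __init__(self, race, health, strength, defence, magic, agility, ranged):
        self.race = race
        self.health = health
        self.strength = strength
        self.defence = defence
        self.magic = magic
        self.agility = agility
        self.ranged = ranged

def attack(attacker, defender):
    # Placeholder formula: strength minus half the defence,
    # but always at least 1 so fights cannot stall forever.
    damage = max(1, attacker.strength - defender.defence // 2)
    defender.health -= damage
    return damage

def fight(a, b):
    # Work on copies so the original character sheets keep their stats.
    a, b = copy.copy(a), copy.copy(b)
    # Higher agility moves first (a simple initiative rule).
    first, second = (a, b) if a.agility >= b.agility else (b, a)
    while a.health > 0 and b.health > 0:
        for attacker, defender in ((first, second), (second, first)):
            if defender.health <= 0:
                break
            dmg = attack(attacker, defender)
            # Re-printing the stats each turn is the simplest way to
            # show a "constantly updating" status line in a console game.
            print("%s hits %s for %d (%s health: %d)"
                  % (attacker.race, defender.race, dmg,
                     defender.race, max(0, defender.health)))
    return a if a.health > 0 else b

player = Character("Org", 100, 10, 10, 5, 8, 5)
npc = Character("Elf", 90, 8, 8, 10, 10, 12)
winner = fight(player, npc)
print("Winner:", winner.race)
```

Because fight() copies its arguments, the originals keep full health, so the same characters can be matched again later.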
Although we all agree to use 'self' as the "me object" proxy, it's not a keyword, and we could use a different stand-in, e.g.:

    class Human (object):

        def __init__(me, name):
            me.name = name

        def __repr__(me):
            return 'Hi, my name is %s' % me.name

    >>> import subgenius
    >>> aguy = Human('Bob')
    >>> aguy
    Hi, my name is Bob

It occurs to me that I could test "me" in place of "self" as it's shorter, and as it might encourage a first-person identification which I think needs to happen when tackling OO.

We explored earlier on this list the difference between CivBuilder 3rd-person games, like SimCity and Civilization IV (many others), and 1st-person shooters, e.g. Quake and Doom. Of course many games give both 1st and 3rd, though we should distinguish between two kinds of 3rd: 3rd as in "I am that character (avatar, action figure or whatever, as in 'Alice')" vs. "I have some god's-eye view" (incorporeal flyer, as in Google Earth and most of those WarCrafty type games, also Sims).

Likewise, I think when coming to think formally in terms of objects (vs. informally, which begins with the emergence of language), it helps to personally project a "self" into various household objects, houses themselves. Like in the movie 'Cars' we need to *become* a thing, then ask (in the first person): what are my behaviors/methods, what are my attributes/properties? It's a game of "who am I" (or "who I am") and is already a natural feature of childhood play (fantasy self-projections).

I'm thinking the word 'self', at least in English, is too '3rd person' in some ways, and looking down on a lot of objects, each with a 'self', you have only a god's-eye view. However, the grammar around 'me' is different -- there's only one of them (one first person), and therefore thinking "me" promotes a kind of first-person introspective attitude. And we *want* that, as an option, when modeling in OO.

So I'd be accomplishing two things in this lesson (involving temporarily substituting "me" for "self" in some class definitions): (a) I'd be communicating the subtle teaching that 'self is not a keyword in Python' and (b) helping with the subliminal process of personally identifying with various objects, in order to become a better object-oriented programmer. At the end of the lesson, I'd reinforce the canonical accepted 'self' (i.e. the god's-eye view) as the proper one, but hopefully students would have taken to heart the point of this lesson.

Note that I'm not proposing this as a "for kids only" type lesson plan experience. I've been brainstorming a lot about what a Computer Science for Liberal Arts Majors might look like (recent link below), and this whole idea of "point of view" is already standard fare in literature courses, as well as film theory. We can bridge to OO through this "pronouns" discussion.

Note about 2nd person: many multi-player Internet games, plus single-user games, do have a "we" concept, i.e. you're part of a team with a shared objective, up against other players, or up against the computer, as the case may be.

Related topic: me.__dict__ is a good intro to the idea of a "personal namespace" in addition to basic Python -- a helpful notion in psychology and diplomacy, where world views may start far apart, but grow closer through growing familiarity with the others' operations. It's in the tradition of Leibniz to want to use some "machine language" as a basis for diplomacy (American Transcendentalism has echoes of that, e.g. in Fuller's 'cosmic computer' meme).

Kirby

Liberal Arts Compsci (except selling as Maths in this context):
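To make the "me" convention and the "personal namespace" point concrete, here is a short sketch of my own (the Dog class is a made-up example, not from the post): a class using 'me' behaves exactly like one using 'self', and each instance carries its own __dict__.

```python
class Dog(object):
    # 'me' instead of 'self' -- perfectly legal, since the first
    # parameter of a method is just a name, not a keyword.
    def __init__(me, name, tricks=None):
        me.name = name
        me.tricks = tricks or []

    def learn(me, trick):
        me.tricks.append(trick)

    def __repr__(me):
        return "Hi, my name is %s" % me.name

rover = Dog("Rover")
rover.learn("sit")
fido = Dog("Fido")

# Each instance keeps its own "personal namespace":
print(rover.__dict__)  # {'name': 'Rover', 'tricks': ['sit']}
print(fido.__dict__)   # {'name': 'Fido', 'tricks': []}
print(rover)           # Hi, my name is Rover
```

Inspecting __dict__ this way is a nice one-liner demonstration that two objects of the same class can hold entirely different state, which is the diplomacy analogy in a nutshell.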