text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Hello all. I have this assignment for class and am running out of time to do this. It's due tomorrow at 11am and all of the tutors are gone since it's a Friday and I was in class all day. There are two parts to it and it's essentially a grading calculator. The first part, the user has to input grades (integers) and once -1 is inputed, the program is supposed to stop. So I did most of it but now I'm just stuck on where to go and what's wrong. I need the program to print out what is on the bottom of the code. I'm sure you can get the gist of it once you read it. *One of the requirements is to have one scanner in the first one and two scanners in the second. Also, the -1 that is put in can't count towards the grading calculations at the end. Also, for the average, it has to rounded to two decimal places if needed, which is why I added in the decimal format, not sure if that's where it goes but it is needed. In the second one, it's the same objective, but the grades are being read in from a .txt file that has grades inside of it. It has to work for any txt file given to it. Any suggestions on what to do for that code? It's pretty much the exact same... Here's what I have. I know some of it is wrong, but this is all I could do. I also know it's missing things, I'm just not sure what they are. If anyone can help me with this that would be awesome. I'm in desperate need to get this done by tomorrow morning. Thanks, I really hope someone can help me out with this. I really need it. The tutors are not here anymore at my school so I'm kinda desperate at this point.Thanks, I really hope someone can help me out with this. I really need it. The tutors are not here anymore at my school so I'm kinda desperate at this point.Code : import java.util.Scanner; public class Grades { public static void main(String[] args) { double sum = 0; double average = 0; int gradeCount = 0; int highScore = 0; int lowScore = 100; int aCount = 0; int bCount = 0; int cCount = 0; int dCount = 0; int fCount = 0; Scanner in = new Scanner(System.in); System.out.println("Welcome to the Grade Calculator Program"); System.out.println(""); System.out.println("Please enter the student scores as integers between 0 and 100."); System.out.println("Separate the scores with a space."); System.out.println("Enter a negative number following the last student score."); System.out.print("Enter the scores here: "); while (jn.hasNext()) { String line = in.nextLine(); gradeCount ++; sum += grade; average = sum/gradeCount; DecimalFormat fmt = new DecimalFormat ("#.0#"); if (grade>=0) if (grade < lowScore) if (grade > highScore) if (grade<=100)&&(grade>=90) aCount ++; if (grade>=80)&&(grade<=89) bCount ++; if (grade>=70)&&(grade<=79) cCount ++; if (grade>=60)&&(grade<=69) dCount ++; if (grade<=59)&&(grade>=0) fCount ++; if (grade<=0) in.close; } System.out.println("Total number of grades : "+gradeCount); System.out.println("High Score : "+highScore); System.out.println("Low Score : "+lowScore); System.out.println("Average Score : "+average); System.out.println("Number of A's : "+aCount); System.out.println("Number of B's : "+bCount); System.out.println("Number of C's : "+cCount); System.out.println("Number of D's : "+dCount); System.out.println("Number of F's : "+fCount); } } And if I'm satisfied with the help someone gives me and get this to run, I'll send them some paypal money ;) -Peter
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/296-problem-grading-calculator-java-program-printingthethread.html
CC-MAIN-2017-30
refinedweb
626
76.42
span8 span4 span8 span4 I'm using FeatureReader with following parameters: Dataset: $(Source_WFS:) Feature Types To Read – From Attribute: _FETURE_TYPES $(Source_WFS:) is composed of some fixed URL address (below is an example one, not the real one I'm using) plus dynamically generated token by Python script in private parameter called Token which is: import fmeobjects import urllib import urllib2 import json username = FME_MacroValues['Username'] password = FME_MacroValues['Password'] client = FME_MacroValues['Client'] expiration = FME_MacroValues['Expiration'] format = FME_MacroValues['Format'] url = '' values = {'username': username, 'password': password, 'client': client, 'expiration': expiration, 'f': format} data = urllib.urlencode(values) request = urllib2.Request(url, data) response = urllib2.urlopen(request).read() jsonToken = json.loads(response) return jsonToken['token'] This worklfow works perfecty in FME 2014, but in FME 2018 I get the error, when trying to load features to Output ports: Failed to retrieve feature types. Received HTTP response header: 'HTTP/1.1 498 498' from '' I suppose $(Token) value is not generated prior to reading WFS address. How can I bypass this issue in 2018? Minimal workspace attached bellow. Hope it helps. Tested with both FME 2017.1 and 2018.1, but unable to reproduce the issue. You may want to send your workspace to Safe support, complete with logs and a pointer to this thread. Unfortunately FME 2018.1.2 didn't solve the problem. I should send the workspace to Safe suport? Also, try using a ParameterFetcher with a Logger to see what the actual value of the WFS URL parameter is when the workspace is running. It is set to Type: Text. That is really weird. I tried to recreate the issue in FME 2018.1 but it all worked as expected. Are you able to post a minimal workspace here that lets ut reproduce the problem? Unfortunately I can't share this workspace. I'm using FME 2018.0.1.1. Maybe this could be an issue? I agree that trying this with the latest FME 2018 release (1.2.1) would be a good start: What is the parameter type of "Source WFS:"? If it's not already the case, can you try to recreate it as a regular Text parameter. Answers Answers and Comments 16 People are following this question. Found a bug in the WFS reader ? 1 Answer FME desktop won't load data from wfs service. 1 Answer Reading WFS fails in FME 2019, works in FME 2016 2 Answers I am trying to use HTTPCaller to generate a token which can be used in a REST API Service. 5 Answers WFS service expose attributes problem 1 Answer
https://knowledge.safe.com/questions/90246/dynamically-read-wfs-with-token-generation.html
CC-MAIN-2019-35
refinedweb
429
59.7
hi, does anyone know of any good daemon design resources for linux. i'm trying to find out if there are standard methods or templates available. TIA, rotis23 Printable View hi, does anyone know of any good daemon design resources for linux. i'm trying to find out if there are standard methods or templates available. TIA, rotis23 I don't think there's any special method for writing a daemon, it depends on what you want it to do. The best way, I believe, is to fork immediately, then set your process group/create your own session, and your away. Something like Code: switch (pid = fork()) { case -1: perror("fork"); return (EXIT_FAILURE); break; case 0: sleep(1); if( setsid() == -1 ) { perror("setsid"); } printf("Daemon started"); break; default: exit(EXIT_SUCCESS); break; } thanks chaps, the daemon needs to do some processing every 5 minutes. so do some processsing and sleep for x amount of time. i'm really interested in how the daemon announces itself to the linux environment (start-up,allocation of pid etc) and how the linux environment interacts with the daemon (signals, kill etc, greaceful termination). thanks Hammer, but their must be some linux docs on how this should be done properly. TIA, rotis23 ok, just to inform on my research. the best way to construct a daemon is as follows. if there are any errors, please inform: 1) fork and setsid (as per Hammer) - hammer, could you tell me why its important to fork - i can't find any info on this? 2) set going in a loop, or listening for events 3) add capability to handle signals, SIGTERM, SIGINT (also good to handle SIGSEGV). When these signals are made, the daemon can shutdown gracefully - close db connections, write log etc. 4) init scripts can then be created using: - daemon process (to start - run in background etc) - killproc process (to stop - sending signal) and that's it. have i missed anything? rotis23 >>hammer, could you tell me why its important to fork It helps split away from the parent in a clean manner. Within the program that forks, the parent part terminates immediately, thus telling the calling process that it's all done. The child goes on living, without a parent (actually it gets adopted by init, if I remember rightly). And yes, you can use signals to help control the program. For example, I've user SIGUSR1 to tell the program to dump some stats about it's uptime and what it's currently doing etc. thanks hammer! just some more info i can't find anywhere! what is the difference - in terms of the effect on the process of the following signals: SIGINT (kill -2) - Interrupt SIGTERM (kill -15) - Terminate SIGKILL (kill -9) - cannot handle SIGHUP (kill -1) - Hangup I know that SIGKILL is the one that is not handled and stops a process dead. I just want to know the difference between the other three. TIA, rotis23 SIGHUP Hangup. Sent when a terminal is hung up to every process for which it is the control terminal. Also sent to each process in a process group when the group leader terminates for any reason. This simulates hanging up on terminals that can't be physically hung up, such as a personal computer. SIGINT Interrupt. Sent to every process associated with a control terminal when the interrupt key (Control-C) is hit. This action of the interrupt key may be suppressed or the interrupt key may be changed using the stty command. Note that suppressing the interrupt key is completely different from ignoring the signal, although the effect (or lack of it) on the process is the same. SIGTERM Sofftware termination. The standard termination signal. It's the default signal sent by the kill command, and is also used during system shutdown to terminate all active processes. A program should be coded to either let this signal default or else to clean up quickly (e.g., remove temporary files) and call exit More here. thanks
http://cboard.cprogramming.com/c-programming/24552-linux-daemon-design-printable-thread.html
CC-MAIN-2016-36
refinedweb
669
70.73
Search Articles FAQs All Questions New Question ASP.NET - slideshow automatic update - Asked By kiran Kumar on 13-Mar-12 02:37 AM hi, in my asp web form default2.aspx, i have placed one image control and attached the extender Slide show extender control to the image control I have placed one script manager also I have placed one folder in my website with the name "images" The images folder consists of the images which i have uploaded using FileUploading control which i have placed in Default3.aspx For that i wrote the code and the images are uploading in to images folder using File Upload Now, i want to show those images which the images folder consists of and after it get updated with new images also. And the images will be displayed in slide show. How to do this?? [)ia6l0 iii replied to kiran Kumar on 13-Mar-12 09:22 PM There is no speical coding required to do this, if you are already reading the images from a folder. It will automatically get updated. If you have a slideshowextender that has the SlideShowService method set to one of the webmethods, like following, then you need to ensure that you are reading the images from the very same folder and building the slides. <ajaxToolkit:SlideShowExtender And perhaps, you have the GetSlidesFromFolder definition as follows: public static AjaxControlToolkit.Slide[] GetSlidesFromFolder() { //Do a server.mappath to the images folder and find out how many images you have. string[] images = System.IO.Directory.GetFiles(Server.MapPath("Images")); //Based on the image count, create the slide array. AjaxControlToolkit.Slide[] slides = new AjaxControlToolkit.Slide[images.length]; //Loop thru the images and create slides for (int counter = 0; counter < images.Length; counter++) { //Use System.IO.File.GetFilename to get the file name. slides[counter] = new AjaxControlToolkit.Slide("images/" + System.IO.File.GetFileName(file)); } / /return the images as slides. return slides; } Hope this helps. kiran Kumar replied to [)ia6l0 iii on 14-Mar-12 01:38 AM it is not working yar.... [)ia6l0 iii replied to kiran Kumar on 14-Mar-12 01:44 AM What is "yar"? And be specific when you say "it is not working". Let all know - what is the actual problem? A problem well stated is a problem half solved. - Charles Kettering kiran Kumar replied to [)ia6l0 iii on 14-Mar-12 01:49 AM i wrote the code as you said.. it is showing error messages: System.IO.File does not contain a definition for 'GetFileName' and System.IO.File is a type but is used like a 'variable' [)ia6l0 iii replied to kiran Kumar on 14-Mar-12 02:07 AM Good. You have moved forward. Pardon me for the mistake on the code. Get into a habit of reading MSDN and debugging your code. No one is going to do that for you. The Filename method was not on the System.IO.File. It is on the Path class. Please read the documentation link posted above. It shows you the exact example of how to use this method. And I have another advice for you. Don't "develop by Google" unless you know what the code is doing line by line. Somesh Yadav replied to kiran Kumar on 14-Mar-12 07:18 AM Hi kiran follow the below steps:- Step 1 - On the top of your aspx file you have to add reference to the Ajax Control Toolkit: view plaincopy to clipboardprint? <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %> Step 2 - Now you have to add Toolkit Script Manager, in order to do so add following code within <form> element: view plaincopy to clipboardprint? <ajaxToolkit:ToolkitScriptManager </ajaxToolkit:ToolkitScriptManager> Step 3 - Then you should add to your page Image control where all your photos will be displayed. Under it place 3 buttons (for going forward, backward and for play/pause). Text for play/pause button will be changed by the extender itself depending on the current state of the slide show. You can add all items manually, dragging them from the toolbox or you can just paste following code to your aspx file. view plaincopy to clipboardprint? <asp:Image <div style="text-align:center"> <asp:Button <asp:Button <asp:Button </div> Step 4 - Now we'll add SlideShow extender to our code. Firstly we'll have to specify in TargetControlID id of the control where the images will be displayed. SlideShowServiceMethod sets the method which will be used to fetch images and SlideShowServicePath sets a path to a webservice. Then you have to provide IDs of the buttons. PlayButtonText and StopButtonText contain text that will be displayed on a play/pause button when the slide show is running or stopped. Finally AutoPlay option starts slide show automatically when the page is opened. view plaincopy to clipboardprint? <ajaxToolkit:SlideShowExtender </ajaxToolkit:SlideShowExtender> Step 5 - Creating a webservice. Right-click on your project name in Solution Explorer and select Add New Item. Then find Web Service on the list and click Add button. This will add a new asmx file to your project. Remember to uncomment following line to allow Web Service to be called from script: view plaincopy to clipboardprint? [System.Web.Script.Services.ScriptService] Now all you have to do is add following method to your new Web Service: view plaincopy to clipboardprint? [System.Web.Services.WebMethod] [System.Web.Script.Services.ScriptMethod] public AjaxControlToolkit.Slide[] GetSlides() { AjaxControlToolkit.Slide[] slides = new AjaxControlToolkit.Slide[3]; slides[0] = new AjaxControlToolkit.Slide("1.jpg", "Image1", "house"); slides[1] = new AjaxControlToolkit.Slide("2.jpg", "Image2", "seaside"); slides[2] = new AjaxControlToolkit.Slide("3.jpg", "Image3", "car"); return (slides); } Your slide show is ready! Of course you can add more photos to it.
http://www.nullskull.com/q/10430965/slideshow-automatic-update.aspx
CC-MAIN-2014-52
refinedweb
950
67.25
Contributed by Gert C. Gustedt, Technical Content Developer and former Support Engineer specializing in RAS Microsoft Enterprise Systems Support AT A GLANCE Key Point: Providing detailed X.25 information on all RAS versions for planners, implementers, and troubleshooters Detail: High Task: Evaluation, planning, troubleshooting Article Section What's There Introduction This paper consolidates information from many sources including information based on the author's own troubleshooting experience. The goal is to help the planner and support engineer with RAS and X.25. It mentions that Pad.inf and Switch.inf script languages are nearly the same and that the new script language in Windows NT 4.0 and Windows 95 is not supported with X.25. List of MS RAS Versions Lists all RAS versions to date (March 1997) General X.25 Information Gives an overview of X.25 and RAS, reasons to use X.25 versus ISDN, including how many X.25 connections a RAS server can support simultaneously. Planning an X.25 RAS Client To Server Dial-Up or Direct Connection Discusses X.25-related features and limitations of the different RAS versions, including which versions can become RAS servers, what dial-up scripts exist and which third-party X.25 products are supported. Implementing an X.25 RAS Client-To-Server Dial-Up or Direct Connection Discusses the RAS after-dialing Terminal screen and how to configure RAS for existing dial-up scripts and how to write new dial-up scripts, if necessary. Explains how to configure direct-connection clients and how to configure RAS servers. Further RAS X.3 specifications and X.28 PAD commands are discussed. Troubleshooting an X.25 RAS Client-To-Server Dial-Up or Direct-Connection Gives and understanding of what connections the RAS client makes on the way to connecting to the RAS server. Discusses frequent problem areas, including debugging dial-up scripts, how to enable logging of communication between RAS and the PAD and gives some tips on how to troubleshoot the server. It then gives a few problem scenarios and solutions of transmission reliability problems. Appendix A: Windows NT 4.0 Networking Supplement Manual RAS X.25 Chapter Contains chapter 9 of the Windows NT 4.0 Supplement Manual covering RAS X.25. Appendix B: General X.3 Parameter Description Gives a more detailed description (than in the RAS X.3 specifications) of the function of each X.3 parameter. Appendix C: Using Pad.inf with Non-X.25 RAS Connections Discusses the advantages of Pad.inf versus Switch.inf. Appendix D: List of Electronic and Hardcopy RAS Documentation in the Different RAS Versions Gives a list of documentation available to help you identify if you have all documentation available for a certain RAS version and where to look for it. Appendix E: Other RAS Sources Gives pointers to the Microsoft Knowledge Base and TechNet and the Microsoft web site. Appendix F: The New Script Language for Windows 95 and Windows NT 4.0 Contains a copy of the Script.doc file included in the Windows NT 4.0 %SystemRoot%\System32\RAS directory and available with Windows 95 Service Pack 1. Appendix G: Troubleshooting RAS 1.0 and 1.1 on a MS OS/2 1.3 Server Running MS LAN Manager Has a copy of two Microsoft Knowledge Base articles that focus on troubleshooting a computer running MS OS/2 1.3 with LAN Manager 2.1 or later and RAS 1.0 or 1.1 installed. Legal Disclaimers Contains all the legal disclaimers pertaining to this document. In many organizations, computers run different versions of Remote Access Service (RAS). This makes the job harder for the planner and support engineer because it requires knowing all about all the versions, and this material is scattered through the manuals, the help and Readme files, the Microsoft (MS) Knowledge Base, the Pad.inf and Switch.inf files, and the Script.doc file (Windows 95 and Windows NT 4.0). This article is designed to be a Remote Access Service X.25 reference that addresses common network questions related to all versions of RAS current at the time of writing (March 1997). To help its target audience (CIO, MIS, or help desk employees) it incorporates the most useful information and provides other helpful material that cannot be found even in the usual sources. Even though this paper focuses on X.25 issues, many sections relating to writing and troubleshooting Pad.inf scripts also apply to Switch.inf scripts in RAS 1.x, Windows for Workgroups 3.11 and Windows NT 3.x because the script language is the same. (For more information on writing Switch.inf scripts see the MS Knowledge Base or the RAS online help.) For convenience and to assist with planning, a copy of the Windows NT 4.0, Networking Supplement, chapter 9, titled X.25 PAD Support is included. Appendix E lists manuals (including on-line sources, when possible) for the different RAS versions. Microsoft recommends that you refer to the manuals that pertain to your RAS versions in addition to this article. Not covered in-depth is the new script language for Windows 95 and Windows NT 4.0. Windows 95 has not been tested with X.25 dial-up, though it may work, and Windows NT does not support the use of the new script language inside the Pad.inf file. (However, Windows NT supports the new script language in regular RAS script files for PPP and Slip dial-up which is not the topic of this article.) The new script language is more intuitive, very similar to BASIC and C, and is well described in Script.doc, which you can find in Appendix F. Three example script files are included with Windows NT 4.0: Pppmenu.scp, Slip.scp, Slipmenu.scp. These three files and Script.doc are in the Windows NT 4.0 %SystemRoot%\System32\RAS directory. Troubleshooting RAS 1.0 and 1.1 for Microsoft OS/2 version 1.3 is mainly covered in Appendix G because most OS/2 implementations have been upgraded to Windows NT. For more information on RAS for Microsoft OS/2 (v. 1.3) versions 1.0 and 1.1 see the Microsoft Knowledge Base. Microsoft Remote Access for MS-DOS, versions 1.0, 1.1, and 1.1a Microsoft Remote Access for OS/2 (v. 1.3), versions 1.0 and 1.1 Microsoft Windows for Workgroups version 3.11 Microsoft Windows 95 Microsoft Windows NT operating system version 3.1 Microsoft Windows NT Advanced Server version 3.1 Microsoft Windows NT Workstation versions 3.5, 3.51, and 4.0 Microsoft Windows NT Server versions 3.5, 3.51, and 4.0 An X.25 network transmits data between two computers with a packet-switching protocol. This protocol relies on an elaborate worldwide network of packet-forwarding nodes (DCEs) that can deliver an X.25 packet to its designated address. X.25 also requires additional hardware such as an X.25 Smart Card or a PAD. For additional information on hardware requirements, see "What Third-Party Products Are Required to Set Up a RAS Connection over X.25" below and the Windows NT Server 3.5 and 3.51 Remote Access Service manual, chapter 6, or the Windows NT Server 4.0 Networking Supplement manual, chapter 9. Connecting to a server through an X.25 network is similar to connecting through a phone line. The only difference is that the phone book entry must specify an X.25 PAD type and an X.121 address for the RAS server. RAS does not know what medium it is running over. It does not know about the X.25 protocol, just as it does not know about how phone lines and modem equipment work on the lower levels. The RAS server uses the Eicon Technology WAN Services Eicon drivers and an internal Eicon X.25 adapter to convert the X.25 protocol to the serial (RS232) protocol signals (and vice versa), or it can send and receive serial signals to and from an external X.25 PAD (packet assembler/disassembler), in which case, no Eicon software and hardware is necessary because the X.25 PAD does the protocol conversion. Some RAS client versions can also be configured with the Eicon driver and adapter or the external PAD, but usually the clients use a modem to call the X.25 network provider dial-up PAD, which is also a modem. After the RAS client modem and the X.25 provider dial-up PAD connect, the X.25 provider usually requires callers to identify themselves for billing purposes. To support caller identification, most RAS clients can run a customized command script, and some can also go into an interactive post-connect Terminal mode, to allow the client to send the user name and password. Note: The external PAD configuration mentioned above is not supported in Windows for Workgroups 3.11, Windows 95, and Windows NT Workstation and Server versions 3.5, 3.51, and 4.0, because they have not been tested. However, there are no known reasons why this external configuration should not work. Note: You need to purchase an Eicon X.25 adapter card and the specific Eicon WAN Services for your operating system. Since Windows NT version 4.0 the "Eicon WAN Adapters" card driver ships with Windows NT. In addition to transmitting data more reliably than regular phone lines, X.25 connections supply bandwidths of up to 56 kilobytes (K) (64K in Europe). How Many X.25 Connections Are Supported by RAS Servers A Windows NT 3.1, 3.5, 3.51, or 4.0 RAS server with Eicon Technologies software and Smart Card installed can host up to 256 RAS client connections over one X.25 line simultaneously. This compares favorably to the expense and space needs of having to purchase and maintain 256 modems and phone lines for the RAS server for regular telephone line connections not using X.25. RAS 1.1 for OS/2 (OS/2 version 1.3), version 2.1 or later only supports 13 simultaneous RAS client connections over one X.25 line. X.25 Versus ISDN Note: In areas where ISDN is available, Microsoft recommends considering the use of ISDN rather than X.25. ISDN is much faster (128kb or more depending on the type of adapter and provider network) without compromising reliability, however, it is likely to be more expensive. For additional information on ISDN and RAS, see the Windows NT 3.5 Server "Remote Access Service" manual, page 8, or the Windows NT 3.5 RAS online Help. RAS 1.0 Microsoft RAS version 1.0 does not have the capability to invoke RAS Terminal or use scripts in .INF files. It does not support X.25. Upgrades to RAS 1.1 and RAS 1.1a are free. RAS 1.1 and 1.1a RAS 1.1 is the first RAS version to support X.25 with a Pad.inf file, however, the syntax in Pad.inf scripts is different from the syntax used in later RAS versions. For more information, see your RAS version 1.1x Pad.inf file, RAS manual, and release notes. RAS 1.1x also does not support a RAS Terminal screen (nor the capability to run scripts from a Switch.inf file for use with intermediary security devices). The latter two features were introduced with RAS for Windows For Workgroups 3.11. Windows 95 Windows 95 was not tested with X.25 RAS Dial-Up Networking and therefore is not supported as a RAS X.25 client, however, it may work. If you want to try RAS X.25 on Windows 95 keep in mind that it does not support the script language used in the Pad.inf and Switch.inf files and that the Pad.inf and Switch.inf files do not exist and cannot be invoked in Windows 95. To get support for scripting in non-X.25 environments (for example for Slip or PPP Internet access) in Windows 95, obtain Windows 95 Service Pack 1 (SP1). In the Admin directory of SP1 you can find the scripting tool Script.exe. The Admin directory also contains the Script.doc file that describes the commands and syntax of that new and improved script language which is, however, incompatible with the Pad.inf and Switch.inf script language. Windows 95 supports an after-dialing Terminal window for connections to non-X.25 providers. We have received unconfirmed reports from customers attempting to use Windows 95 with X.25 dial-up that there are problems of receiving incomplete text from the provider PAD in the Terminal window. Note: Windows NT 4.0 supports the script language used in Pad.inf and Switch.inf as well as the new Windows 95 script language. RAS for Windows for Workgroups 3.11 and Windows NT 3.1, 3.5, 3.51, and 4.0 support RAS Terminal screens for X.25 dial-up. Eicon Technologies supports Windows 95 as a non-RAS X.25 client with the OSI PCGATEWAY software which does not allow connections to a Windows NT RAS server. RAS 1.1x for LAN Manager 2.1 or later (MS-DOS-based or Windows-based) RAS 1.1 for LAN Manager for OS/2 (v. 1.3), version 2.1 and later RAS for Windows for Workgroups 3.11 RAS for Windows NT Workstation and Server 3.1, 3.5, 3.51, and 4.0 Note: The Dial-up Networking client (DUN) for Windows 95 has not been tested as an X.25 dial-up client and is therefore not supported in that function, but may work. The following RAS versions can assume the RAS client or RAS server roll: RAS 1.1 for LAN Manager for OS/2 (OS/2 v. 1.3), version 2.1 and later To make Windows NT work with an X.25 SmartCard, install the driver and the X.25 SmartCard from Eicon Technologies in your computer. Windows NT 4.0 ships with the Eicon X.25 driver. The driver is called "Eicon WAN Adapters." You do not need to purchase additional software if you have purchased an adapter that is supported by this driver. See "What Third-Party Products Are Required to Set Up a RAS Connection over X.25" below. The RAS versions that can become RAS servers are identical to the ones listed above under " RAS Clients Accessing X.25 Through an X.25 Smart Card." RAS and External X.25 PADs - Supported Only in Windows NT 3.1 RAS for Windows NT 3.1 was the only RAS version tested with an external X.25 PAD (see RAS manual page 34, 37 Figure 6.3). Other versions of RAS may work with an external X.25 PAD, but are not supported in that configuration. These scripts allow an automatic connection of your RAS client with the RAS server over X.25 dial-up: RAS 1.1, Windows for Workgroups 3.11, and Windows NT 3.1 include the following scripts: Sprintnet (9600 bps and 2400 bps) InfoNet (9600 bps and 2400 bps) The following scripts are included with Windows NT 3.5, 3.51, and 4.0, but should work with Windows NT 3.1, too. Compuserve SITA Group Network Alascom/Tymnet/MCI Telematics Note: These scripts are not required to connect. You can also connect manually by using a RAS Terminal window and typing the required parameters, user name and password. For X.25 providers that are not listed here, you can write your own script or connect manually. To write your own script, see the section titled "Writing a Pad.inf script to automate Remote RAS Logons." In case there were updates to Pad.inf after this document is published, check the Pad.inf file that is installed with RAS automatically to find out for which X.25 carriers your current RAS version includes scripts. The section title (enclosed in square brackets [ ]) for each script usually indicates the name of the X.25 carrier for which the script was designed, except the section titled Eicon X.PAD which is the X.25 Eicon card script for Windows NT servers and clients that have an Eicon card installed. Only X.25 cards by Eicon Technologies are supported for RAS over X.25. See the following paragraph for more information on Eicon products. There maybe X.25 cards and drivers available by other companies, however, they have not been tested by Microsoft. Please contact that vendor for support. Microsoft only supports Eicon Technologies hardware and software with RAS over X.25. You only need Eicon products for your RAS X.25 server and RAS X.25 direct-connection clients. RAS Dial-Up clients do not need any additional software or adapters. Software To get X.25 support on your Windows NT 3.1, 3.5, or 3.51 server you need to purchase the corresponding version WAN Services for Windows NT from Eicon Technologies. However, in Windows NT 4.0 you do not need to purchase Wan Services For Windows NT (it does not exist). Windows NT 4.0 already ships with the Eicon driver for Eicon X.25 adapters. You can install the adapter driver by running Control Panel and choosing Network. The adapter driver name is Eicon WAN Adapter. Driver Versions and Adapter Support For Windows NT 3.51 obtain Eicon Wan Services version 3, release 4a (V3R4A). This version works only on Windows NT 3.51, not on Windows NT 4.0. It supports the following Eicon X.25 adapters: PC, HSI, DPNA, MPNA, C21, and S51 There is a newer version of Wan Services for Windows NT 3.51 known as version 3, release 4c (V3R4C). This version works only on Windows NT 3.51 but has additional adapter support as follows: S52, S50, C20 The Windows NT 4.0 Eicon Wan Adapter driver supports the following adapters: C21, S51, PC, HSI, DPNA, and MPNA. For a complete up-to-date list contact Eicon Technologies. Note: At the time of this writing (December 1996) Eicon Technologies stated it was close to releasing a new X.25 Windows NT Server product called Connection For Windows NT 4.0. This software has support for new and faster X.25 adapters and is said to support software compression and other new features. To find out more about this new product contact Eicon Technologies. Contact information is provided below. Per Eicon Technologies, Connection For Windows NT 4.0 supports the following adapters: All cards the Eicon Wan Adapter in Windows NT 4.0 supports. New: C20, S50, S52, and S94. Note: The S-series of adapters, that is the adapters with an "S" in the model name, are generally faster (S stands for Server) than those with a "C" in the model name ("C" stands for Client) which are designed to be for X.25 clients which do not require as much speed. You can contact Eicon Technologies at: USA: (214) 239-3270 Canada: (514) 745-5500 Internet: On CompuServe, type: go eicon The following is recommended on all workstations that are going to access a dial-up PAD to connect through X.25 to a RAS server using an Eicon card. Establish all modem connections using a reliable link (either V.42, LAPM, or MNP4) and enable hardware handshaking between your local modem and the workstation. Enable these settings by editing the Remote Access entry's modem settings. Select the check box for modem error correction and the check box for hardware flow control. Using this feature ensures that there is end-to-end flow control and no data will be lost between the dial-up PAD and the client workstation. You may encounter problems unless these modem settings are made. If RAS does not have an X.25 logon script for your particular X.25 provider (see above "For Which X.25 Providers Exist RAS Dial-Up Scripts in the Pad.inf File"), you need to write a script or configure RAS to pop up the interactive RAS Terminal screen after dialing to display logon prompts from X.25 providers and allow you to type your logon credentials and other parameters. For more information on writing your own X.25 script see "Creating Scripts for the RAS Pad.inf File" below. RAS 1.x Clients do not have support for a RAS Terminal screen. To configure a Windows NT RAS 3.1, 3.5, or 3.51 entry to use RAS Terminal after dialing: In Remote Access, select an entry. Choose Edit. If the Security button is not available, choose the Advanced button. Choose Security. (In Windows NT 3.1 and Windows for Workgroups 3.11 this button is labeled Switch). In the After Dialing field, select Terminal. (In Windows NT 3.1 and Windows for Workgroups 3.11 this is labeled Post-Connect). Choose the OK button until you return to the main Remote Access screen. To configure a Windows NT RAS 4.0 entry to use Terminal after dialing: On the Windows NT 4.0 desktop, double-click My Computer and then Dial-Up Networking. In Dial-Up Networking, select a Phonebook entry and then click More and choose Edit entry and modem properties. In the Script tab under After Dialing (Login), click on Pop Up A Terminal Window. To configure a Windows 95 Dial-Up entry to use Terminal after dialing: On your Desktop double-click My Computer. Double-click Dial-Up Networking Double-click Make New Connection and click Configure. Click the Options tab and click the Bring Up Terminal Window After Dialing check box. You can configure a RAS entry to run a Pad.inf script after dialing. For example, to automate the task of logging onto a remote host, create the script in the Pad.inf file and then configure the RAS entry to use the created script after dialing. The following steps allow you to connect to an X.25 provider through Windows for Workgroups RAS dial-up. These instructions assume that a dial-up script exists for your X.25 provider: In Remote Access, do one of the following: If you need to add a new RAS Phonebook entry, choose Add from the toolbar. -or- If you want to edit an existing RAS Phonebook entry, choose Edit from the toolbar. In the Phone Book Entry dialog box, if the Advanced button is displayed, select Advanced, otherwise go to step 3. In the Port field, select the COM port your modem is connected to. Note: Do not select the "Any X.25 port" option in the Port drop down list unless you are connecting through an Eicon X.25 card instead of a modem. At the bottom of the Advanced Phone Book Entry dialog box, select X.25. Fill in the fields in the X.25 dialog box: or X.25, the User Data field is usually left blank unless additional connection information is required such as a user name. Note: When using a Pad.inf script the "PAD Type" and "X.121 Address" are mandatory settings. The "User Data" and "Facilities" fields are only used with certain scripts where additional information is required by the X.25 provider. For more information, see the RAS Online Help under "Setting X.25 Parameters." Activating an X.25 Script in Switch.inf instead of Pad.inf in Windows for Workgroups 3.11 RAS In Windows for Workgroups 3.11 RAS, if you have a problem of Pad.inf script files not executing completely, you can try running your X.25 script from Switch.inf after replacing all references to the special Pad.inf macros X25address, Userdata, and Facilities (if applicable) with the actual values because Switch.inf does not support these macros. You can configure a RAS entry to run a Switch.inf script before dialing, after dialing, or both. For example, to automate the task of logging onto a dial-up PAD, create the script in the Switch.inf file and then configure the RAS entry to use the created script after dialing. To activate a Switch.inf script in Windows for Workgroups version 3.11: Run Remote Access and select an entry. Choose the Edit button. Choose the Security button. (In Windows NT 3.1 and Windows for Workgroups 3.11 the button is labeled Switch). In the After Dialing box, select the name of the script. The section header in the Switch.inf file is what appears as the name of the script. (In Windows NT 3.1 and Windows for Workgroups 3.11 this box is labeled Post-Connect). Choose the OK button until you return to the main Remote Access Screen. When you dial this entry, the selected script runs after RAS dials and connects to the remote host. On a direct connect (via an Eicon card) X.25 RAS client you can configure a RAS entry to run a Pad.inf script to call the X.25 RAS server. In most cases. The following steps allow you to connect to an X.25 provider through a Windows NT direct X.25 connection. These instructions assume that a script exists for your X.25 provider: In the Port field, select an X.25 port which usually appears as a COM port with a high number such as COM3 or COM10 or select "Any X.25 port." for X.25, the User Data field is usually left blank unless additional connection information is required such as a username. These instructions apply to RAS Direct-Connection or Dial-Up X.25 clients: In Dial-Up Networking, select a phonebook entry and then click More and choose Edit entry and modem properties. In the Basic tab under Dial Using, select the X.25 card if you have a direct X.25 connection, or, select the COM port your modem is on, if you are using a dial-up X.25 connection. If you need to configure the entry, click on Configure. In the X.25 tab, select your X.25 provider dial-up provider (or Eicon X.PAD for direct connections) and type the X.25 address of the remote server. In most cases of direct-connect RAS X.25 client configurations. Type additional information in the User Data and Facilities boxes if the Userdata and Facilities variables are used in the script you selected. To install RAS, run Control Panel and choose Network. In Windows NT 3.x choose Add Software. In Windows NT 4.0 click the Services tab and click Add. Then add the Remote Access service. To configure your RAS server for an X.25 RAS network, follow the installation instruction for the X.25 adapter and software. As a rule of thumb use the defaults, for example, leave the X.25 COM port (e.g.: COM10) at the default unless you are absolutely sure another COM port number has to be used. Also for X.25 Buffering: On each communication port in the Eicon PAD configuration, it is recommended that the packet length supported be left at the default of 128. This will give optimum performance on the server. Otherwise, connection problems may occur. Windows NT 4.0 RAS Server To configure RAS on Windows NT 4.0 for the X.25 adapter: Install the Eicon adapter in your computer according to your computer manufacturer and Eicon adapter installation instructions. Install the Eicon driver by running Control Panel, clicking Network, then Adapters, and then Add and adding the Eicon Wan Adaters driver. Configure the driver according to your Eicon WAN Services for Windows NT System Guide or in online Help "Help For WAN Services Configuration" on how to configure the total number of virtual circuits. The sum of the two-way virtual circuits (TVC) and incoming virtual circuits (IVC) in the X.25 configuration screen must equal the number of incoming X.25 clients the server will support at one time. You may have to find out how many TVCs and IVCs your X.25 line has by contacting your X.25 vendor. Define the number of communication ports to be available for RAS in the XPAD configuration program. Choose the Network option in Control Panel. Click the Services tab. If didn't chose to install the Eicon X.PAD services during the Eicon card installation in step1, click the Add button and install the "Eicon X.PAD Driver" and follow the instructions for rebooting. Restart Windows NT and Control Panel Network and click the Services tab. Click the Eicon X.PAD Driver in the Network Services list box and choose Properties. Configure the total number of COM ports by selecting the COM ports from the Available Ports list and then clicking Add. It is recommended that the number of communication ports should be equal to the number of virtual circuits (TVC+IVC) configured. Click Configure Ports and make sure that the Local Profile and Remote Profile name is RAS. Make changes to the fields in this screen only if you know they are necessary. The default values work with most X.25 networks and RAS. Configure the number of communication ports (Eicon XPAD's) in RAS using the Network option in Control Panel. Run Control Panel and click Network again. Click the Services tab, then click Remote Access Service. Click Properties, click Add, and then click Install X.25 Pad. Add an X.25 port to RAS by specifying a port name. Take the default or first available port first to avoid connection problems due to unexpected configurations. Specify an X.25 PAD and click OK. If you do not see the PAD name you want in the Choose X.25 PAD Name box, you can edit the PAD names in Pad.inf. For more information, see the section on X.25 PAD Support in the Windows NT Server Networking Supplement, below in appendix A. Make sure the selected ports are configured for dial-in. Windows NT 3.x RAS Server When configuring a RAS Server to use X.25 over an EiconCard, several steps must be followed to define the number of clients that can connect to the server. Define the total number of virtual circuits that the EiconCard will be configured for. In the Network Settings dialog box, select the EiconCard driver in the Installed Adapter Cards box. Choose the Configure button. Follow the instructions in your Eicon WAN Services for Windows NT System Guide on how to configure the total number of virtual circuits. In the Network Settings dialog box, select the Eicon X.PAD Driver in the Installed Network Software box. Configure the total number of COM ports by selecting the COM ports from the Available Ports list and then choosing the Add button. Configure the number of communication ports (Eicon XPAD's) in RAS using the Network option in Control Panel. Make sure the selected ports are configured for dial-in. Configuring the X.25 RAS Server Eicon card to Configure the Client Dial-Up PAD The client dial-up described in "X.3 RAS Specifications and Potential Problems" Table 9.3 as soon as a connection is established through X.29 commands. To configure an X.25 smart card to make these changes, see the configuration manual for your specific card. Note: If the X.25 smart card on the server end does not support commands for the X.29 language, the client PAD script must set the X.3 parameters locally. If you have problems, contact the support site for your X.25 smart card vendor. Server Bandwidth and the Total Number of Clients To obtain maximum performance in the RAS clients and to ensure reliable connections, ensure that the aggregate throughput of all clients does not exceed the bandwidth of the RAS server. For example, four clients running at 2400 bits per second (bps) can be connected to a server with a 9600 bps X.25 line. However, attaching a fifth client at 2400 bps will exceed the server's bandwidth. This will cause all clients to operate at speeds below 2400 bps. If a virtual circuit, communication port, and RAS port are defined for five ports, then five clients can connect using X.25. However, connecting five clients at one time is not recommended since the throughput of each client will be very low and will cause time-outs in the network protocols running over RAS. When using a null-modem connection on X.25 networks, the server X.25 port must be set to DCE and the client should be set to DTE. If the port on both computers is set to DTE, you cannot connect. The X.25 null modem should be configured for DCE and internal clocking before an X.25 null-modem client can connect. To configure the X.25 null modem, in Control Panel choose Network. In the Network Settings dialog box, choose the X.25 card driver in the list of adapters, then choose Configure. Select the null modem port, choose X.25 and change the node type to DCE. Select X.25 and set the clocking to Internal. Save the configuration and restart the system. On an X.25 network, a X.28 Packet Assembler/Disassembler (PAD) converts serial signals from the RAS client modem to and from X.25 packet-switched signals for communication with the X.25 host, for example, a RAS server with Eicon WAN services installed. A PAD is similar to a modem in that it has a command mode and a data transfer mode. In command mode the user can issue X.3 commands to the PAD. In data transfer mode the PAD forwards data between the client and the server which can be any equipment that complies with the X.25 standards like Unix TTY hardware and a Unix host or a RAS client and a RAS server with Eicon WAN Services installed. The host must comply with the X.29 standard , which allows the host to configure the dial-up PAD's X.3 parameters remotely. Therefore, the PAD X.3 parameters can be set by the client or by the server. The client or the server can invoke predefined purpose-specific Profiles that configure all X.3 parameters at once. When writing a new RAS Pad.inf script, it can sometimes save code to invoke a Profile first and then set only a couple parameters with the Set command rather than having to use the Set command to set all X.3 parameters individually. These X.3 parameters configure the dial-up PAD so that it produces a data stream of required characteristics similar to serial communication where Parity, Data Bits, and Stop Bits are configured. For RAS to work over X.25 the X.3 parameters must be set according to the RAS specifications mentioned below under "X.3 RAS Specifications and Potential Problems." The X.3 parameters can be set by the RAS client or the WAN services on the X.25 RAS server (the host) depending on the X.25 provider network functionality. The X.29 standard allows the host to modify the dial-up PAD X.3 parameters over the X.25 network, however, in most RAS Pad.inf scripts, the RAS client sets the X.3 parameters. The RAS client can set the X.3 parameters either in a Terminal window or usually inside a script (see Pad.inf). Some X.25 network providers use additional proprietary parameters that are extensions to the standard X.3 set of 22 parameters. For more information on X.3 extionsions, see "X.3 RAS Specifications and Potential Problems" below. Therefore, for RAS X.25 set up and troubleshooting it is important to verify that all X.3 parameters are set correctly. If problems occur they can be caused by setting the X.3 parameters correctly, but failing to configure the X.29 parameters which can override the X.3 parameters or vice versa. Important X.28 Commands The RAS client can issue the following commands to the PAD when the PAD is in command mode: Set <X.3 parameter A>:<value> [,<X.3 parameter B>:<value>,<X.3 parameter C>:<value>,…] Sets the PAD X.3 configuration to define the form of the data stream. Multiple parameter/value pairs can be listed on the same line if each pair is separated by a comma. Par? Displays the current settings of all X.3 parameters. This is an important command for verifying X.3 parameter settings during RAS X.25 setup and troubleshooting. PROF (identifier) Invokes a predefined set of values for all or a number of X.3 parameters to prepare the data stream for different types of connections eliminating the need to set X.3 parameters individually. PRF? Displays the current profile settings. This is an important command for verifying X.3 parameters during setup troubleshooting. This command may not work on all X.25 networks. RESET Resets all X.3 parameters back to their default settings. Default settings vary according to vendor equipment used on the particular X.25 network. Contact your X.25 network provider for other commands, that the RAS client needs to issue before calling the RAS server. For example, on SprintNet the @ sign configures the PAD to use 8 data bits, the letter D requests 9600 bits per second (bps) communication. Important X.28 Service Signals The following are X.28 service signals that a RAS client or Windows Terminal client receives when making a call to the X.121 address of the RAS server X.25 host. One of these signals should appear in the Terminal window or the Device.log file if logging is turned on: PAD Service Signal Explanation CLR Indication that the call has been cleared (not accepted) ERR A PAD command signal is in error COM The call is connected Note: Some vendors have non-standard service signals like "connected" or similar. The following table shows the RAS X.3 specifications. This table was taken from the Windows NT 4.0 Networking Supplement Chapter 9, titled "X.25 PAD Support." For a detailed description of the standard 22 X.3 parameters, see "Appendix B: General X.3 Parameter Description." Give these specifications to your X.25 provider and the information in the following two paragraphs as well. The values set for the X.3 parameters in the following table apply to the X.25 network equipment of most providers, however, there are exceptions. The values in the table are derived from a SprintNet X.25 network. If your X.25 provider has different X.25 equipment the X.3 values may actually specify different unit sizes or may mean "off" when the same value on SprintNet means "on". Therefore, to achieve the same effect as on a SprintNet network you may have to specify values that differ from the values in the table below. For example, if you set parameter 4: to 4:1 to set the Idle Timer to an interval of 50 miliseconds on a Sprintnet x.25 network, the same setting of 4:1 may mean an interval of only 20 miliseconds on your x.25 carrier network (e.g. on a SITA Group Network 4:1 means 40 miliseconds as of July 1994). Therefore, parameter 4: should be set to 4:2 in this example (or should be tried at 4:1 or 4:2 for the SITA Group Network). As the example demonstrates, it is essential to know how the values for the X.3 parameters on your x.25 carrier network map to those of a SprintNet X.25 network or you may experience connection problems. See your X.25 providers documentation for more specific information on your X.25 network if you have problems. A note on syntax. The parameter number is always separated from the parameter value by a colon. for example, to set parameter 1 to a value of 126 you type: 1:126 To set parameters 1 and 2 to the value 0 (zero) from within the RAS Terminal Window separate the two parameter/value pairs with a space. For example: set 1:0 2:0 and press the Enter key. X.25 Configuration Values Parameter number X.3 parameter Value 1 PAD Recall 0 2 Echo 3 Data Fwd. Char 4 Idle Timer 5 Device Ctrl 6 PAD Service Signals 7 Break Signal 8 Discard Output 9 Padding after CR 10 Line Folding 11 Not Set 12 Flow Control 13 Linefeed Insertion 14 Padding after LF 15 Editing 16 Character 17 Line 18 Line Display 19 Editing PAD Srv Signals 20 Echo Mask 21 Parity Treatment 22 Page Wait Caution: Failure to set these values as shown prevents the Remote Access Service from functioning properly. For information on setting these values, see the instructions with your X.25 smart card. Some X.25 network providers use additional proprietary parameters that are extensions to the standard X.3 set of 22 parameters. Northern Telecom Equipment may support the following X.3 parameter extensions. Not all extended X.3 parameter information was available or confirmed at the time of publishing this paper, therefore, please do not rely on the information in the following table. Instead, please consult your X.25 provider whether extended X.3 parameters are used on the X.25 network. The information in this table is provided as an example only and will be updated as information becomes available. Example Extended Parameter Table Extended Parameter Number Extended Parameter 113 ? 114 Abort Output 118 119 120 121 Additional data forwarding? 122 123 Parity on Input data stream; 0 means OFF 124 125 Output Pending Timer 126 Linefeed insertion The SITA Group Network is an X.25 provider that supports these additional parameters and maybe more. Therefore, if your provider equipment uses extended X.3 parameters, it is important for RAS X.25 setup and troubleshooting to verify that all X.3 parameters are set correctly including these extended parameters. If problems occur they can be caused by setting the standard X.3 parameters correctly, but failing to configure the extended X.3 parameters which can override the X.3 parameters or vice versa. For example, the following parameters can cancel each other based on the example table of extended parameters above: X.3 Parameter Number Corresponding Example Extended Parameter Number Consult your X.25 provider for a complete list of parameters that can cancel each other's effect, if applicable. If you plan on using a non-Eicon X.25 card, verify first that the drivers are available to work with the version of your RAS server. If you have a non-Eicon X.25 card in a RAS client, but the Eicon X.PAD script does not work, modify the script according to your X.25 card vendor and your X.25 providers requirements to initialize the PAD and make the X.121 address call to the RAS server. To do that, first make a copy of the Eicon X.PAD script and copy it to the end of Pad.inf. Rename the script title so that it is unique in Pad.inf and then start modifying the copied script. For more information on modifying Pad.inf scripts, read the section below titled "Writing a Pad.inf Script to Automate Remote RAS Logons." Use an Existing Script or Write a New Script Instead of using the interactive RAS Terminal screen, you can automate the logon process to X.25 providers by using an existing script in the Pad.inf file if you are using an X.25 provider for which a script is provided. See the section above "For Which X.25 Providers Exist RAS Dial-Up Scripts in the Pad.inf File". Keep in mind that these scripts are provided only for your convenience and are not guaranteed by Microsoft to work because X.25 providers may change or upgrade their equipment at any time which may cause the requirements for scripts to change, in effect making the scripts obsolete. If this is the case in your situation or if your X.25 provider is not listed at all you can create a modified or new script in the Pad.inf file yourself. Pad.inf was specifically designed for X.25 scripts, but most of the information pertaining to Pad.inf scripting also applies to Switch.inf scripting. However, Pad.inf supports a few more features and has a few more requirements. If you have problems using Pad.inf under Windows for Workgroups please see "Potential Pad.inf Problem in Windows for Workgroups" below in the Troubleshooting section. Creating Scripts for the Ras Pad.inf File Note: In Windows NT version 4.0, you must use the script language described in this section if you are planning to use Pad.inf for X.25 dial-up. The new and improved script language described in Script.doc in Appendix F (also in the Windows NT 4.0 %SystemRoot%\System32\RAS directory) does not work in Pad.inf. RAS X.25 dial-up has not been tested with the new script language using a regular *.scp file. If you are planning to run your script on a previous version of RAS as well, you must use the Pad.inf script language described in the following paragraphs: The Pad.inf file is like a set of small batch files or scripts, all contained in one file separated by script titles called section headers. A Pad.inf script can contain six elements: a section header, comments, commands, responses, response keywords, and reserved macro keywords. In addition to dividing the Pad.inf file into individual scripts, section headers start the scripts. Comment lines are used to document the script. Any other line in a script is either a command or a response. A command is issued from the local RAS client. A response is received from the remote device or computer. To write an automatic script for your RAS client you must know the required commands and corresponding responses for the intermediary device. The commands and responses must be in the exact order that the device expects to encounter them. Branching statements, such as GOTO or IF command, are not supported by the Pad.inf and Switch.inf script language. The required sequence of commands and responses from the PAD device should be in the device documentation. If you are connecting to a commercial service, the required sequence of commands and responses should also be available from the service support staff. The Pad.inf file contains scripts for different X.25 providers or different PADs that the RAS user calls. The scripts are activated by configuring Remote Access Phonebook entries as described under "Configuring a Windows NT 3.x or WFWG 3.11 RAS Client for a Pad.inf X.25 Script" and "Configuring a Windows NT 4.0 RAS Client for a Pad.inf X.25 Script." For additional information on writing a Pad.inf script, see the instructions in the Windows NT 3.5 RAS manual, pages 65-67 SECTION HEADERS A section header marks the beginning of a script, is enclosed in square brackets, and cannot exceed 31 characters. For example: [Route 66 Login] Each script requires one section header. The section header appears in the RAS Phonebook field allowing you to select RAS Terminal or any other Pad.inf script. For more information, see the "Activating Pad.inf Scripts" section below. Comment lines are preceded by semicolons (;) in the left most margin (column one), are optional, and can be placed anywhere in the file. For example: ;This script was created by Patrick on September 29, 1995 COMMANDS A command comes from the local computer. A response comes from the remote device or computer. The COMMAND= statement, which can be used in three different ways, is used to send commands to the intermediary device. Note: Use upper case for all Pad.inf commands. DEVICETYPE=pad DEVICETYPE=pad tells RAS that it is not communicating with a modem which is the default if DEVICETYPE= is completely omitted. Use the string as the first line of your script only if your are writing a new script for communication with an X.25 adapter. DEFAULTOFF= DEFAULTOFF= by itself specifies that no default off commands are going to be send to the PAD. This command allows you to specify a custom macro, but is usually only used in the Modem.inf file. See the Modem.inf file for examples. MAXCARRIERBPS=<bits per second> MAXCONNECTBPS=<bits per second> The MAXCARRIERBPS=<bits per second> and MAXCONNECTBPS=<bits per second> should be specified for every X.25 script. These parameters specify a bits-per-second (bps) rate, however, these rates are not dictating the actual maximum carrier or connect rate. Instead, they indicate to the RAS server to wait for a response from the client as long as it would wait for a response over a connection of that bps rate. For example, the client-server carrier speed may occur at 9600 bps, but both parameters may be set as follows to allow the server to wait longer before timing out: MAXCARRIERBPS=1200 MAXCONNECTBPS=1200 The following setting allows the server to time out faster: MAXCARRIERBPS=9600 MAXCONNECTBPS=9600 COMMAND= COMMAND= by itself causes an approximate two second delay, depending on CPU speed and the presence of caching software (for example, SMARTDRV.EXE). If the intermediary device cannot process all of the characters on a COMMAND=<CustomString><cr> line (because they are sent at once), use multiple COMMAND= statements. COMMAND=<CUSTOM STRING> COMMAND=<custom string> sends the custom string and causes a slight delay of several hundred milliseconds (depending on CPU speed and installed caching software). The delay gives the intermediary device time to process the custom string and prepare for the next command. COMMAND=<CUSTOM STRING> <cr> COMMAND=<custom string><cr> sends the custom string immediately. Consult the remote device documentation to determine which strings are required. RESPONSES Each command line is followed by one or more response lines. Consult the remote device documentation to determine which response strings and dialogs are expected by the remote device. RAS compares responses received with what you put on the response lines. If it matches, RAS uses the response related keyword and macro. RESPONSE KEYWORDS The keyword in a response line specifies what your RAS client does with the responses it receives from the remote computer. The response strings your RAS client receives from the remote device or online service can be used with one of the following keywords in response lines: OK= remote_device_response <macro> The OK keyword indicates that RAS can execute the next script line if the condition on the right side of the equal sign is met. LOOP= remote_device_response <macro> The LOOP keyword indicates to RAS to wait for a response that matches the condition to the right of the equal sign and if that condition is met to wait for the same response again. If a response does not meet the condition on the right side of the equal sign RAS executes the next line. CONNECT= remote_device_response <macro> This keyword is used near the end of the script to indicate the end of the script. ERROR_NO_CARRIER= remote_device_response <macro> This keyword is used to test for the presence of a carrier. Intermediary devices report their presence in different ways. ERROR_DIAGNOSTICS= remote_device_response <macro> This keyword can be used in conjunction with the <Diagnostics> macro to allow RAS to display a message box containing a problem cause and diagnostic information. These response related keywords are usually clustered, but do not have to be. CONNECT= is usually the last line, unless it is followed by an ERROR_line. For example: CONNECT=<match>" CONNECT" ERROR_NO_CARRIER=<match>"NO CARRIER" ERROR_DIAGNOSTICS=<cr><lf><Diagnostics> ERROR_DIAGNOSTICS=<lf><cr><lf><Diagnostics> NoResponse The RAS client always expects a response from the remote device. The client waits until a response is received unless a NoResponse statement follows the COMMAND= line. If there is no statement for a response following a COMMAND= line, the COMMAND= line still executes, but the script does not execute any further. RESERVED MACRO KEYWORDS The macros in the following list are reserved words, which you cannot define in Pad.inf to create a new entry. Reserved words are case insensitive. Macro Function <x25address> Inserts the value you type in the "X.121 Address" field (Windows NT 3.x, WFWG 3.11) or "Address" field (Windows NT 4.0) of the RAS application (Dial-Up client). <userdata> Inserts the value you type in the User Data field in the RAS application (Dial-Up client). <facilities> Inserts the value you type in the Facilities field in the RAS application (Dial-Up client). <cr> Inserts a carriage return. <lf> Inserts a line feed. <match> Reports a match if the string enclosed in quotation marks is found in the device response. For example, <match>"Smith" matches Jane Smith and John Smith III. <?> Inserts a wildcard character. For example, CO<?><?>2 matches COOL2 or COAT2, but not COOL3. <hXX> Allows any hexadecimal character to appear in a string including the zero byte, <h00>. (XX represents hexadecimal digits) <ignore> Ignores the rest of a response from the macro. For example, <cr><lf>CONNECTV-<ignore> reads the following responses as the same: "crlfCONNECTV-1.1" and "crlfCONNECTV-2.3." If a lot of information is ignored, like a large welcome banner, RAS might time out and move on to the next script line. This usually causes problems. To avoid this problem, use multiple pairs of COMMAND= followed by OK=<ignore> to force RAS to wait longer and ignore additional response stings. For example: COMMAND=OK=<ignore> COMMAND= OK=<ignore> <diagnostics> This macro function can be used in conjunction with the ERROR_DIAGNOSITICS= keyword macro to allow RAS to display a message box containing a problem cause and diagnostic information. This topic describes each part of a relatively long X.25 script for a fictitious X.25 provider. Not every script has to contain as many commands as this one, but for training purposes this type of script is most helpful. Please see the Pad.inf file for the following examples of short scripts: Compuserve or Alascom/Tymnet/MCI. Every script must start with a script header followed by DEFAULTOFF=, MAXCARRIERBPS=<baudrate>, and MAXCONNECTBPS=<baudrate> to properly inform the RAS client software of transmission speeds and corresponding expected delays, for example: [Fictitious X.25 Provider] DEFAULTOFF= MAXCARRIERBPS=9600 MAXCONNECTBPS=9600 Next issue a command to the remote computer, followed by one or more response lines. This initial command might be simply to wait for the remote computer to initialize and send its logon banner. The initial command depends on your X.25 carrier equipment. In some cases it is necessary to wait two seconds after making the telephone connection to the X.25 PAD modem so the PAD can initialize and get ready to receive commands from your RAS client. So a COMMAND= to cause a two second delay should be used first in such a case, otherwise not. Note, the COMMAND= is preceded by a line with a semicolon indicating to RAS it is a comment line. Use comments to allow easy debugging in the future. Near the end of this walk-through the comment lines are used instead of regulary text: ; Give the PAD 2 seconds time to initialize after the modem connection. COMMAND= If there is no response expected, RAS needs to be informed about that or it waits forever for a response. So the next line should be: NoResponse On a computer with a Pentium processor you may need to add another COMMAND= followed by NoResponse to achieve an approximate two second delay since the COMMAND= delay is controlled by CPU speed or slowness not actual time. Next the X.25 provider requires you to select 8 data-bit communication and send an "@" sign to the PAD upon which the PAD is not expected to respond. To delay RAS from moving on to the next command too fast the carriage return (<cr>) is left out: ; set 8 databit mode COMMAND=@ NoResponse Next a "D" should be sent to configure the PAD for 9600 baud. However, through testing it was found that the @ command needs about 3 seconds to be processed by the PAD before it is ready to receive the next command; so to give the PAD some more time (about 2 more seconds) in addition to leaving out the <cr> in the previous command, use the following two lines: ; Delay RAS 2 more seconds to give the PAD time to process previous command COMMAND= NoResponse Next send the D without a carriage return to cause RAS to delay moving on to the next command; the response can be ignored: ; D sets 9600 baud on X.25 network COMMAND = D OK=<ignore> Next the caller's user id and password need to be entered. ; Enter your user id in the Remote Access program's X.25 Settings "User ; Data:" field. COMMAND=<UserData><cr> The response from the PAD after the user ID is not important: OK=<ignore> Next the caller's password needs to be provided to the X.25 provider. For that the Facilities variable is used which allows the user to enter the password in the RAS user interface: ; Enter your x.25 password in the Remote Access program's X.25 Settings ; "Facilities:" field. COMMAND=<Facilities><cr> The response should contain the word "active" if the user ID and password were recognized: OK=<match>"active" If the user ID and password are not recognized, the error information provided in the response from the PAD should be displayed by RAS with the help of the <Diagnostics> macro. Since the X.25 equipment error response has two versions (preceded by a different number of carriage returns and line feeds), two error lines are added to make sure both error response versions are received by the RAS client and displayed on the screen: ERROR_DIAGNOSTICS=<cr><lf><cr><lf><lf><Diagnostics> ERROR_DIAGNOSTICS=<lf><cr><lf><Diagnostics> Next profile 6 is invoked to set all 22 (or more) X.3 parameters to values that are close to the X.3 RAS specifications. The few parameters that are known to differ from the X.3 RAS specifications will be set later: COMMAND=PROF 6<cr> The profile set the X.3 Echo facility to Off (2:0) which prevents responses from the PAD being send to the RAS client. It is important to see the responses from the PAD so problems can be identified, therefore parameter 2 is set to On again: ; turn echo back On to allow PAD responses to be sent to RAS client COMMAND=SET 2:1<cr> OK=<ignore> ; set the standard parameters that the profile didn't set to X.3 RAS specifications yet. COMMAND=SET 4:1,6:1,16:0,17:0,18:0,19:0,21:0<cr> OK=<ignore> ; set certain extended X.3 parameters so they don't undo what the standard ; parameters were configured for COMMAND=SET 118:0,119:0,120:0<cr> OK=<ignore> If you are in the process of debugging X.3 parameter settings use the following two lines to request the PAD to send all X.3 parameter settings information to the RAS client so you can capture it in the Device.log or Modemlog.txt files. COMMAND=PAR? ; Call the RAS X25 server COMMAND=<x25address><cr><lf> ; CONNECT response means that the connection completed fine. CONNECT=<match>"COM" ; ERROR_DIAGNOISTICS response means connection attempt failed - the X25 ; DIAGNOSTIC information will be extracted from the response and sent ; to the user. ; ERROR_NO_CARRIER means that the remote modem hung up. ; ERROR resonses are for generic failures. ERROR_NO_CARRIER=<match>"NO CARRIER" ERROR_DIAGNOSTICS=<cr><lf><Diagnostics> ; Finally turn echo Off again to comply with X.3 RAS specifications COMMAND=SET 2:0<cr> NoResponse ; End of Pad.inf script Creating One Script for Multiple Situations RAS for Windows NT 4.0 supports a new script language that supports subroutines, IF, WHILE, and GOTO command, etc. which allows for complex scripts, however, this script language does not work in the Pad.inf file. It is designed to work with PPP and SLIP dial-up connections in *.scp files invoked in RAS under the Script tab (not the X.25 tab) and has not been tested with X.25 dial-up. Windows 95 supports the same script language, but has not yet been tested with X.25 and therefore is not supported in that environment; however, it may work. A company with employees working at different locations may need to provide employees with the ability to log on to an X.25 service from various locations requiring different scripts. Not all employees may have the same RAS versions and the RAS script language on pre-Windows NT 4.0 RAS does not provide any IF, or GOTO statements or support for subroutines. Therefore, you cannot test for logical responses or errors received from a PAD and then branch off to a different execution path. However, the script language does allow you to catch errors and display them on the screen using: ERROR_DIAGNOSTICS=<cr><lf><Diagnostics> To provide a variety of RAS clients with a Pad.inf or Switch.inf script you need to write several scripts in the Pad.inf file to manage all local logon dialog variations. For example: If you have a Windows for Workgroups 3.11 RAS client or Microsoft RAS 1.1a client, set an environment variable to a value representing the local X.25 carrier. Then run a batch file that copies the correct script to the file name Pad.inf or switch.inf (depending on the value of the environment variable) and then start Windows. If you have a Windows NT RAS client, create an icon that runs a similar batch file that tests the environment variable and then runs RAS. All scripts can be provided on one disk and all the user has to do is copy the files to a directory on their hard drive and set the environment variable. This can be automated as well to minimize user interaction. The following are often especially difficult problem areas when implementing a RAS X.25 network: On the Client side: Writing a new logon script for communication with the dial-up PAD. Setting the x.3 paramenters on the x.25 providers network. On the Server side: The server side usually does not experience difficult problems unless the Eicon software is erroneously changed from the default settings. To ensure proper operation of the dial-up X.25 RAS client and server, do the following: Verify your RAS client can establish a simple RAS link with the RAS server over a regular public switched telephone network (PSTN) using a supported modem. This verifies the RAS software on client and server is properly working and the serial port, cable, and modem on the client side are properly configured. Note: If problems persist over PSTN and you have an internal modem in the client or the serer, test with an external modem. This configuration is usually easier to troubleshoot because the modem can be easily reset by switching it on and off while an internal modem can only be reset completely if the computer is turned off and on again. External modem configurations are also less prone to interrupt and i/o address conflicts. To isolate the problem to the client or the server it is often useful to have two RAS clients with different versions of RAS to test with. If the problem occurs with one client and not the other, you know that the server is configured correctly and the client that failed has the problem. If problems persist PSTN connections, use the MS RAS troubleshooters and the MS Knowledge Base available on the MS support web site or query your TechNet CD. If simple PSTN connections work and the problems over a X.25 connection persist, the problem lies with the X.25 network or the RAS server X.25 configuration. Read item number 2. Make sure the Eicon software and hardware is installed on the RAS server without changing any default settings, for example, do not change the COM port numbers Eicon software is configured to use. For example, Eicon WAN Services version 3, release 1 on Windows NT Server 3.51 uses COM10 as the default COM port. If problems persist, read item 3. Verify that you can type messages back and forth over x.25 using the Windows Terminal (not the RAS after-dialing Terminal window!) application on the client and server side. If this works but RAS connections do not, the server Eicon software is probably configured correctly and the problem lies with the X.3 parameter settings not matching the RAS specifications. Read item 4. Verify that your X.25 provider has configured the X.25 network according to the RAS X.3 specifications above or if it is your responsibility to set the X.3 parameters for each call, verify that you are setting them correctly. To resolve this problem, If you are using a script: Turn on logging on your RAS client (See "Enabling Logging and Creating a Device.log file" below). Insert the command COMMAND=PAR? into your X.25 dial-up script after your X.25 vendor user account name and password and after your attempt to set the X.3 parameters so you can later verify the result of your attempt. Call the RAS server to run the script with the PAR? command. Verify that the Device.log or Modemlog.txt file captured the X.3 parameter settings. Compare the X.3 parameters in the Device.log file to the Microsoft X.3 RAS Specifications above and note any mismatches. Be sure to read "X.3 RAS Specifications and Potential Problems" above. If any parameter values are set incorrectly, correct your script in Pad.inf to set them to the correct value. If you are already setting them to the correct value in your script, the problem may be a timing issue where the RAS client is sending the script command too fast for the PAD to process. Adjust your script by inserting more COMMAND= statements to delay RAS and give the PAD more time to process the previous command (See "Stepping Through an Example Script" above). There is also a possibility of you trying to set a parameter that is not changable by users. For example, many X.25 providers have a fixed terminal speed (X.3 parameter 11) and any attempt to change that setting causes an error. Check with your X.25 provider if there are any other X.3 parameters that are "frozen." If you are using RAS Terminal: After you type your X.25 vendor user account name and password and after typing the X.3 parameter settings, type the PAR? command. Write down the resulting PAD response with all the X.3 parameter settings and compare it to the RAS X.3 specifications. If any parameter values are set incorrectly, you may not have specified them when you gave the Set command. Be sure to set them to the correct value the next time. In all troubleshooting, the cause needs to be identified through tests that isolate the problem to the component that is not working properly. To be able to isolate a problem quickly an understanding of the following is helpful and sometimes necessary: Hardware and software components and their functionality. Required sequence of events and expected feedback from the software. Troubleshooting tools available. A. Hardware and Software Components of X.25 Configurations Verify that all the following hardware and software components are installed and configured correctly: RAS Client Software RAS software X.25 Logon Script (unless RAS Terminal is used) Latest X.25 card driver (if X.25 SmartCard is used and supported in this client) Hardware Modem X.25 SmartCard (if used) Phone line X.25 Provider X.3 settings X.29 settings X.28 Packet Assembler/Disassemblers (PADs) RAS Server X.25 card driver(if X.25 SmartCard is installed) Wan Services if applicable X.25 SmartCard (unless external PAD is used) Cables B. The Sequence of Events During RAS X.25 Connections This section has two subsections because the events during RAS X.25 dial-up connections differ from the events during direct RAS X.25 connections (where both the RAS client and the RAS server have an X.25 card installed). Read the section that applies to your configuration. X.25 Dial-Up Connections For a successful dial-up RAS connection to a remote network over X.25 to a RAS X.25 server, the RAS client needs to make five successfull connections, that is, the RAS client has to use command mode five times and each time following the command mode switch into a data transfer mode that becomes the command mode for the next connection level. After the fifth connection the user can issue network commands to the remote network or run applications from the remote network: Connection I—Connect to the dial-up PAD of the X.25 provider. Connection II—Authenticate with the X.25 provider software. Connection III—Connect over X.25 to RAS server X.25 adapter. Connection IV—Authenticate with the RAS server service. Connection V—Log on to the remote Windows NT domain. These five connection phases can be broken down into the following actions: X.25 Dial-Up Connection I The RAS client dials the number for the X.25 service providers dial-up PAD. The dial-up PAD and the client modem establish a connection. (The RAS server is not involved yet, that is, the RAS server is not aware of the RAS client call to the X.25 dial-up PAD.) X.25 Dial-Up Connections II and III The RAS client supplies miscellaneous information, including but not limited to: For X.25 Dial-Up Connection II PAD configuration parameters to set databits, echo, etc. User name to identify the user for billing purposes to the X.25 provider. Password to maintain security on the X.25 customer account. For X.25 Dial-Up Connection III X.3 parameters to configure the X.25 network according to RAS specifications. The RAS server X.121 address to call the RAS server host PAD configured with Eicon WAN services. The host PAD accepts the X.25 connection and the client and host PADs go into data transfer mode. This information is supplied by the RAS client through two methods depending on your configuration: The RAS client displays a RAS Terminal window (available in Windows For Workgroups 3.11, Windows 95, Windows NT 3.1, 3.5, 3.51, and 4.0) and the user types the information. The RAS client executes a logon script that provides the information with the correct timing which is critical. X.25 Dial-Up Connection IV The RAS client and the RAS server begin the RAS client authentication conversation. If the RAS client is authenticated as an authorized RAS client, the RAS connection is established and the RAS client and server go into data transfer mode. X.25 Dial-Up Connection V The RAS client logs on to the remote domain for network access. RAS 1.1 clients need to log on with the Net Logon command. RAS for Windows for Workgroups 3.11 clients can log on using Log On/Off icon in the Network group in Program Manager. Windows NT RAS clients are logged on automatically with the credentials they provided during system startup, or if those credentials differ from the remote domain credentials are prompted for their credentials for the remote domain when accessing resources that require proper permission. X.25 Direct Connections For a RAS client to connect to a remote Windows NT network over a direct X.25 to X.25 RAS connection between RAS client with a X.25 card installed and RAS server (also using an X.25 card), the RAS client needs to make three successfull connections: Connection I—Connect over X.25 to the RAS server Eicon WAN services. Connection II—Authenticate with the RAS server service. Connection III—Log on to the remote Windows NT domain. X.25 Direct Connection I The RAS client issues X.3 PAD configuration parameters to set data bits, echo, etc. and calls the RAS server X.121 address using a script in Pad.inf. The RAS server host PAD configured with Eicon WAN services responds and accepts the X.25 connection. The client and host PADs go into data transfer mode. X.25 Direct Connection II The RAS client and the RAS server begin the RAS client authentication conversation. If the RAS client is authenticated as an authorized RAS client, the RAS connection is established and goes into data transfer mode. X.25 Direct Connection III The RAS client logs on to the remote domain for network access. RAS 1.1 clients for OS/2 need to log on with the Net Logon command. Windows NT RAS clients are logged on automatically with the credentials they provided during system startup, or if those credentials differ from the remote domain credentials are prompted for their credentials for the remote domain when accessing resources that require proper permission Before writing scripts to automate the process of logging onto a PAD, use the RAS Terminal feature to familiarize yourself with the logon sequence of events. For more information on activating the RAS Terminal feature, see the section "Implementing an X.25 RAS Client-to-Server Dial-Up or Direct Connection" above. To find errors that prevent your scripts from working, log all information passed between RAS, the modem, and/or the PAD (including errors reported by any other intermediary device in your configuration) by turning on RAS logging (see "ENABLING LOGGING AND CREATING A Device.log FILE" in this section below). After you enable logging, the Device.log file is created (when you start RAS) in the Windows NT %systemroot%\SYSTEM32\RAS subdirectory or the Windows for Workgroups \WINDOWS directory. If an error is encountered during script execution, execution halts. Determine the problem by studying any RAS error messages that appear on the screen and by studying the Device.log file contents. Make necessary corrections to the script and then restart RAS. If you are running Windows for Workgroups RAS 3.11 and the script execution halts even though you verified all lines to be error free, read the paragraph titled "Potential Pad.inf Problem in Windows for Workgroups" in the implementation section below. The Device.log file appends any communication as long as RAS is not restarted. If you restart RAS, the Device.log file is erased and re-created. Therefore, if you make changes to Pad.inf during your script development which always require you to restart RAS, and you need to save the current traces contained in the Device.log file, rename the Device.log file before starting RAS again. To enabling logging and creating a Device.log file under Windows NT 3.x or 4.0: Warning: Using Registry Editor incorrectly can cause serious, system-wide problems that may require you to reinstall Windows NT to correct them. Microsoft cannot guarantee that any problems resulting from the use of Registry Editor can be solved. Use this tool at your own risk. Hang up any connections, and exit from Remote Access. Run Registry Editor (REGEDT32.EXE). From the HKEY_LOCAL_MACHINE subtree, go to the following key: SYSTEM\CurrentControlSet\Services\RasMan\Parameters Change the value of the Logging parameter to 1: Logging:REG_DWORD:0x1 Logging begins when you restart Remote Access or start the Remote Access Server service (if your computer is receiving calls). You do not need to shutdown and restart Windows NT. To enable logging and create a Device.log file under Windows for Workgroups: Using a text editor such as Windows Notepad, edit the SYSTEM.INI file. In the [Remote Access] section, add the following line: LOGGING=1. Save the file. In Windows 95 the Device.log text file is named Modemlog.txt instead and is created in the Windows directory when you restart Windows and RAS. To enable logging and create a Modemlog.txt file under Windows 95: Click the Connection tab and click on Advanced. Click the Record A Log File check box. Many script problems occur because of timing problems in the command-response dialog between the RAS client and the dial-up PAD. Often the RAS client sends commands too soon to the dial-up PAD because the PAD is still processing the first command while the second command already arrives from the RAS client. In those cases the second command is lost by the time the PAD becomes available. If the PAD has a buffer to store the command, the timing becomes less of an issue, but most PADs do not have buffers for storing remote commands. To diagnose timing problems and measure timing intervals precisely, you may use a serial analyzer such as a Hewlett Packard HP 4957A, but a serial analyzer is not required. Serial analyzers can be expensive relative to using the trial and error method of troubleshooting a script which takes longer, but usually is successful, too. To prevent the RAS X.25 client from sending commands too soon, use the COMMAND = without the carriage return macro (<cr>), or the COMMAND = <PAD_X.28_or_other_command> without <cr>, and the LOOP = commands. The tricky part is that the amount of time delayed with these commands varies by the speed of your processor and the presence of caching software in your RAS client. Therefore, on a 386-based computer the COMMAND= and the COMMAND= <PAD_X.28_or_other_command> cause a longer delay than on a 486-based or Pentium-based computer. For example, if your Device.log or Modemlog.txt file shows that a certain command in your script is not received by the PAD, insert a "COMMAND=" line before that command to gain about 2 second delay. For more information, see the text and the examples in the Switch.inf and Pad.inf files in the Windows NT <SystemRoot>\System32\RAS directory. After the X.25 provider has given you the commands you need to send from your script or RAS Terminal screen to log on to the dial-up PAD, start out by issuing the commands manually in a RAS Terminal window. Once you verified that this works you can start building a script either from scratch or by copying one of the existing scripts to the end of the Pad.inf file and renaming and modifying that script to suite your X.25 provider equipment. RAS for Windows for Workgroups 3.11 reportedly may under certain circumstances not execute Pad.inf scripts completely even though no error was encountered in the script. If that problem occurs you can try copying your Pad.inf script to the Switch.inf file and replace the special Pad.inf macros X25address, Userdata, and Facilities (if applicable) with the actual values because Switch.inf does not support these macros in Windows for Workgroups. See the section "Activating an X.25 Script in Switch.inf in Windows for Workgroups 3.11 RAS." Note: If you are using Windows for Workgroups make sure that you have the RAS program files with a file date of April 1994 or later installed. The update to these RAS files is free. Call Microsoft Technical Support to obtain these files. These files make RAS memory usage more efficient and eliminate error 640 and other symptoms. For more information see the Microsoft Knowledge Base on or your TechNet CD. See the information provided above under "X.3 RAS Specifications and Potential Problems." Problem: After calling the RAS server X.121 address the following server PAD response appears on the X.25 provider monitoring equipment: call cleared - remote directive (the X.25 provider helpdesk person sees this on the screen) and the RAS Client Device.log file captures: clr conf Solution: This is a sign of the RAS server Eicon card not being properly initialized. Make sure that the Eicon software is not changed from the default configuration. Verify the COM port used by the Eicon software, for example, COM10, is not changed to another COM port, for example, COM4. Eicon Software Trace Utility Eicon Technologies provides the EiconCard Loadable Module Management Utility (ECMODULE) which allows the tracing of all activity on the Eicon card. These traces may have to be interpreted with the help of an Eicon support engineer. The following are X.25 troubleshooting tips from the Windows NT 4.0 Help File: Problem: After connecting through a dial-up PAD, the server consistently fails to authenticate the client. Solution: If the remote access server is running and clients cannot connect to it directly through an X.25 smart card or an external PAD, the dial-up PAD may have the wrong X. 3 parameters or serial settings. Ask your administrator for the correct settings, listed in the Chapter 9 "X. 25 PAD Support" in the Networking Supplement of Windows NT Server 4.0. Problem: A connection has been established, but network drives are disconnecting, and you are dropping sessions or getting network errors. Solution: Congestion on the Remote Access server's leased line may be the cause. The administrator should make sure that the speed of the leased line can support all the COM ports at all speeds clients use to dial in. For example, four clients connecting at 9600 bps (through dial-up PADS) require a 38,400-bps (four times 9600) leased line on the server end. If the leased line does not have adequate bandwidth, it can cause timeouts and degrade performance for connected clients. This example assumes the Remote Access Service is using all the bandwidth. If it is sharing the bandwidth, fewer connections can be made. Problem: While transferring files, you frequently get the error messages "Network drive disconnected" or "Network drive no longer exists." Solution: On X. 25 smart cards, change the Negotiate network parameters option in the X. 25 settings to Yes. This problem arises when X.25 parameters, such as the size of the send and receive window, are set differently for the server, the network, and the client X.25 software. By enabling the Negotiate network parameters option on the client's (if using the direct X. 25 connection) and the server's X.25 software, you let the server, the network, and the client use commonly negotiated X.25 network parameters. Topic 4 is not included here. This topic is better explained with this article. Problem: A modem connected to a dial-up PAD connects a lower speed than it should. Solution: Replace the modem with a compatible one from the list in the Setup program. The following section is from the Windows NT 3.51 and 4.0 RasRead.txt file: Troubleshooting Remote Disconnections When a client connection is cleared, the system event log of the RAS server running X.25 can be examined for an error message. The event log can record why the remote client or the remote network disconnected. If the remote client (through a dial-up PAD or local PAD) disconnects, the following warning message will appear in the system event log: Remote DTE cleared the X.25 call on XPADxxx, X.25 Return Codes: Cause yy (hex) Diagnostic yy (hex) The "XPADxxx" is the port name defined in the XPAD configuration. "yy" is a hex string. For DTE clearing the cause will always be 00. The Diagnostic code can be 00--indicating that the remote client requested a disconnect--or another non-zero value. When the diagnostic code is non-zero it indicates a clearing due to the remote client's dial-up PAD service. Contact the remote client's X.25 service provider to determine the problem. If the X.25 network disconnects, the following warning message will appear in the system event log: Network cleared the X.25 call on XPADxxx, X.25 Return Codes: Cause yy (hex) Diagnostic yy (hex) The "XPADxxx" is the port name defined in the XPAD configuration. "yy" is a hex string. For a network clearing the cause value will always be a non-zero value. The diagnostic code in the cause can be any value. Consult your local X.25 service provider with the cause and diagnostic value to determine the exact reason for the network disconnect. Chapter 9 - X.25 PAD Support An X.25 network uses a packet-switching protocol to transmit data. This protocol relies on an elaborate worldwide network of packet-forwarding nodes (Data Communications Equipment [DCEs]) that participate in delivering an X.25 packet to its designated address. Dial-up Asynchronous Packet Assemblers/Disassemblers (PADs) constitute a practical choice for Remote Access clients because they don't require an X.25 line to be plugged into the back of the computer. Their only requirement is the telephone number of the PAD service for the carrier. Note: This chapter is specific to X.25 PADs. X.25 cards can also be supported through WAN miniport drivers. The Remote Access Service lets you access the X.25 network in two general ways: Server/Client Method of access Client (for the Windows™ or Windows NT operating systems) Asynchronous Packet Assemblers/Disassemblers (PADs) Server and client (for Windows NT systems only) Direct connections The next section tells how to access the X.25 network in both ways for specific configurations. The Remote Access Service for X.25 networks offers two configurations for the client and one for the server: Table 9.1 X.25 Configurations Client/Server Configuration Client Dial-up PAD Direct connection to the X.25 network through X.25 smart card Server Similar in format to Modem.inf (which contains script information used to talk to the modem), Pad.inf contains conversations between the client software and the PAD. For details, see Appendix C, "Understanding Modem.inf." Pad.inf is located in the \systemroot\SYSTEM32\RAS folder. x25address diagnostics userdata facilities Caution: Using reserved words as macro names in Pad.inf could result in unpredictable behavior of the Remote Access software. Sample Pad.inf The following sample Pad.inf file will help you create a section within Pad.inf for your X.25 network. This example shows an entry for Sprintnet: [SPRINTNET] ;The following three lines are temporary. DEFAULTOFF= MAXCARRIERBPS=9600 MAXCONNECTBPS=9600 ; The next line will give a delay of 2 secs - ; allowing the PAD to initialize COMMAND= NoResponse COMMAND= NoResponse ; The @ character sets the SPRINTNET PAD for 8 databit communication. COMMAND=@ NoResponse COMMAND= NoResponse ; The D character requests a 9600 speed. COMMAND=D<cr> ; We don't care for the response so we ignore it. OK=<ignore> ; A carriage return line feed again to initialize ; the PAD read/write buffers COMMAND=<cr><lf> OK=<ignore> COMMAND=<cr><lf> OK=<ignore> ; Set X.3 settings on the PAD which make it work well with RAS. ; Broken into two parts since the line is too long. COMMAND=SET 1:0,2:0,3:0,4:1,5:0,6:1,7:0,8:0,9:0,10:0,11:0<cr> OK=<ignore> ; Set the other half of X.3 parameters COMMAND=SET 12:0,13:0,14:0,15:0,16:0,17:0,18:0,19:0,20:0,21:0,22:0<cr> OK=<ignore> ; Finally try to call RAS X25 server COMMAND=C <x25address><cr><lf> CONNECT=<match>"CONNECT" ERROR_DIAGNOSTICS=<cr><lf><Diagnostics> ; CONNECT response means that the connection completed fine. ; X25ERROR response means connection attempt failed - the X25 CAUSE and ; DIAGNOSTIC information will be extracted from the response and ; sent to the user. ; ERROR responses are for generic failures. After this sample conversation for SPRINTNET is completed (with the correct responses), the X.25 connection is established. If errors are detected during the PAD conversation, no connection is made. Note: The Remote Access Service currently works with PADs set to 8 data bits, 1 stop bit, and no parity. Consult the documentation for the PAD to see how to install these settings. In Pad.inf, you can use the COMMAND_ series of commands (COMMAND_INIT, COMMAND_DIAL, and COMMAND_LISTEN) or the generic COMMAND. But do not mix the two families of commands. For more information on the COMMAND_ series, see Appendix C, "Understanding Modem.inf." For troubleshooting information, see the Remote Access online Help. Operating between the client and the Remote Access server, an asynchronous PAD converts serially-transmitted data into X.25 packets. When the PAD receives a packet from an X.25 network, it puts the packet out on a serial line, making communication possible between the client and the X.25 network. Remote Access clients can connect with Remote Access servers through dial-up PAD services supplied by X.25 carriers, such as Sprintnet and Infonet. After the client's modem (modem A in Figure 9.1) connects to the PAD's modem (modem B), the client software must converse with the dial-up PAD. When their conversation is successfully completed, a connection is established between client and server. The conversation (command/response scripts) for the PADs supported by an X.25 carrier is stored in the Pad.inf file. Remote Access software supplies one example. To customize your PAD, see "Pad.inf Format," in this chapter. For example, Pad.inf contains two Sprintnet entries: Sprintnet Standard and Sprintnet Alternate. Generally, if you are calling through 9600 bits-per-second (bps) or faster dial-up PADs, try Sprintnet Standard. If you are calling through 2400 bps or slower dial-up PADs, try Sprintnet Alternate. If one Sprintnet entry fails to connect reliably, try the other one. Sprintnet dial-up PADs should work with both. Note: For dial-up PADs, you must use the COMMAND= format, not the COMMAND_INIT, COMMAND_DIAL, and COMMAND_LISTEN format. Figure 9.1 shows how a client connects to the Remote Access server through a dial-up PAD and the X.25 network. Note: For best results when using a dial-up PAD, use a modem that matches the one used by the PAD carrier (or at least matches the V. protocol supported by the carrier's modem). The following table compares connecting through dial-up PADs and connecting directly to the X.25 network: Table9.2 Comparison of Dial-Up PADs to Connecting Directly Dial-Up PAD Direct connection Saves the expense of a dedicated leased line (direct connection). Requires expensive leased line. Allows connections from hotels, airports, homes—anywhere a phone line is available. Requires users to dial in from a fixed location. Requires two steps to connect. Connects conveniently in one step. Limits maximum communication speed to whichever speed is slower, the modem's or the PAD's. Lets communication take place up to the speed of a leased line, 56 kilobytes (K). Allows less control in configuring PADs. Offers greater reliability. Only a client can connect through a dial-up PAD. Both servers and clients can connect directly. PAD and Serial Configuration To configure your PAD correctly, set the X.3 parameters according to the information shown in Table 9.3 later in this chapter. The configuration of the dial-up PAD should be as follows: 8 data bits 1 stop bit No parity serial communication For dial-up PADs, make sure your vendor supports this configuration. The PADs might already be set to the correct configuration for connecting directly through an internal X.25 smart card. If they are, do not change the configuration. RAS also supports connecting directly from the remote computer to the X.25 network through a smart card, which acts like a modem. An X.25 smart card is a hardware card with a PAD embedded in it. To the personal computer, a smart card looks like several communication ports attached to PADs. To access the X.25 network through a direct connection, you must have a direct line connection to an X.25 network (clients only) a smart card Note: The server side always requires an X.25 smart card, but the client side requires one only when connecting to the X.25 network directly. Note: For connecting to the network directly, you must use the COMMAND_INIT, COMMAND_DIAL, and COMMAND_LISTEN format. Figure 9.2 shows how the server and a Windows NT client (both equipped with smart cards) connect to the X.25 network directly. The Remote Access server does not support callback on X.25 networks. After installing Windows NT Server and adding the Remote Access Service, follow these steps: To set up the Remote Access server for an X.25 network Install the X.25 smart card (according to the manufacturer's instructions). A communications driver for the X.25 smart card that emulates communication ports is supplied by the hardware manufacturer or by a third party. Make sure your X.25 smart card is configured with the X.3-parameter values shown in Table 9.3. From the list of devices on the Remote Access Setup dialog box, select an entry corresponding to the X.25 smart card. In setting up the Remote Access server, make sure that the ports selected are configured for dial-in. Note: Make sure that the speed of the leased line can support all the serial communication (COM) ports at all speeds at which clients will dial in. For example, 4 clients connecting at 9600 bps (through dial-up PADs) will require a 38,400-bps (4 times 9600) leased line on the server end. If the leased line does not have adequate bandwidth, it can cause time-outs and can cause the performance for connected clients to degrade. Table 9.3 X.25 Configuration Values This section tells how to set up Remote Access clients for connecting to the X.25 network through PAD services and for connecting to the X.25 network directly. Following these steps to connect a client to an X.25 network: Dial from the client's modem to a PAD (modem-to-modem). Establish a connection over the X.25 network between the PAD and the server-side X.25 smart card. After you've established a connection, communicate as you would through any asynchronous connection. For a complete description of connecting through dial-up PADs, see "Accessing X.25 Through Dial-Up PADs," earlier in this chapter. Configuring Client PADs The client in Table 9.3 as soon as a connection is established through X.29 commands. To configure an X.25 smart card to make these changes, see the configuration manual for your specific card. To set up the client for connecting directly to the X.25 network, follow the procedures used in setting up the Remote Access server. See "Setting Up the Remote Access Server for an X.25 Network," earlier in this chapter. Make sure the communication ports are selected as dial-out. Connecting to a server through an X.25 network is similar to connecting through a phone line. The only difference is that the phone book entry must specify an X.25 PAD type and an X.121 address. To add a phone book entry with X.25 or to add X.25 to an existing entry See RAS online Help. This online Help also provides troubleshooting information. This is a general listing of X.3 Packet Assembler/Disassembler (PAD) parameters. There are 22 standard PAD parameters since 1984. Parameters 16, 17, 18, and 19 describe editing functions that are not used with RAS. For more information on how to set these parameters with RAS, see the paragraph titled "X.3 RAS Specifications and Potential Problems" above. Parameter 1: PAD recall Description: This enables an escape from the PAD on-line state (data transfer mode) to the PAD command state on receipt of a specific character. When the PAD receives this character, the PAD prompt is displayed on your terminal monitor. Parameter 2: Echo Description: This parameter is set to "0" (zero) for echo and "1" fornon-echo. One of the most frequently used parameters, this is handy for applications used for suppression of text when you are typing in your name and password. When this parameter is set, all characters received from the terminal are echoed, excluding those specified by parameters 5,12,20,22. Setting parameters 12 or 22 to non-zero values suppresses character echo (XON and OFF) even if parameter 2 is set to echo on. Parameter 3: Data Forwarding Signal (characters) Description: This value is bit-mapped. The PAD does not usually transmit one character at a time; it prefers large blocks of data, such as lines of text. A normal character used for this parameter is [carriage return = 2]. A PAD manufacturer's manual should have a table showing the options for this parameter. Parameter 4: Idle Time Delay Description: With the Data Forwarding, this parameter provides the capability to forward data to a host based on idle time. If there is data in the buffer and there has been no additional characters received in the IDLE TIME, the buffer is sent to the host. The time units are in 0.05 seconds and the values range from 1 - 255. The implementation of the time unit sizes can vary from vendor to vendor. Parameter 5: Flow Control - PAD to Terminal Description: This parameter tells the PAD to transmit the XON/OFF characters to the DTE, depending on the buffer state. Parameter 6: PAD Result Code Control Description: This controls how the PAD result codes are transmitted to the terminal. The parameter can stop the PAD from sending service codes back to the terminal in response to events as the X.25 calls for clear or reset. Parameter 7: Action on Receipt of Break from Character Terminal Description: This parameter is bit-mapped. The break action here is a sequence (from the host that the PAD is connected to) for indicating that attention is required. This can be used in the interruption of a long transmission that the host may think is hung in a loop or stuck in constant transmit mode. Parameter 8: Discard Output Description: Set this parameter so that you can abort a running process on the remote system by pressing a [break] key. If set to zero [0], normal data delivery is used. Parameter 9: Padding after Carriage Return Description: Specifies the number of {NUL} characters to transmit after a carriage return. Parameter 10: Line Folding Description: This allows for the formatting of data into regular line lengths when delivered to the character terminal. If a line length is specified, a [carriage return / line feed] must be transmitted. Parameter 11: Terminal Speed Description: This is a READ ONLY value. The parameter shows the current DTE speed. It is automatically set by the PAD using the last AT command. Baud Rate 110 300 1200 600 75 2400 4800 7200 or 9600 14400 or 19200 38400 Parameter 12: Flow Control of the PAD by Local Terminal Description: This is basically the opposite of parameter 5: it allows the character terminal to control the flow control. Parameter 13: Line Feed Insertion after Carriage Return Description: This specifies whether the PAD inserts a line feed character after carriage returns. This applies only in the PAD on-line state. Parameter 14: Padding after Line Feed Description: This is the same as parameter 9 except that the padding {NUL} characters instead of a carriage return are inserted after a line feed. Parameter 15: Editing Description: This specifies whether editing is used in the PAD on-line state. Parameter 16: Character Note: Parameters 16, 17, 18, and 19 help describe the available editing functions. If editing is enabled, parameter 4 (Idle Time Delay) is ignored. Description: This defines the delete character to delete the last characterin the editing buffer. ASCII 8 is the backspace character. Parameter 17: Line (buffer) Description: This defines the line delete character. ASCII 24 is <Ctrl-X>. Parameter 18: Line Display Description: This defines the line display. If you enter the character specified and editing is enabled, the editing buffer is displayed. ASCII 18 is <Ctrl-R>. Parameter 19: Editing PAD Result Codes Description: This defines the effect of editing buffered characters with the character delete and line delete functions. Parameter 20: Echo Mask Description: Bit-Masked Parameter. If parameter 2 is set to 1, this parameter lets you select the characters that are echoed. Parameter 21: Parity Treatment Description: This controls the parity and character format used by the terminal. Best left in the "OFF" condition. Parameter 22: Page Wait Description: This allows for pagination of data sent to the terminal. If the terminal display can display 20 lines and this parameter is set to 20, the PAD sends 20 lines of data then stops transmission. In Windows for Workgroups 3.11 and Windows NT 3.1 and 3.50 Pad.inf has the advantage over Switch.inf that it supports three macros (variables): X.121 Address, User Data, and Facilities that are getting their values from the X.25 user interface. This makes Pad.inf more user-friendly and secure because the password does not need to be entered permanently in the Pad.inf script like it would have to in the Switch.inf file. For example, calling a RAS server with an intermediary security device switched in-line where the security device requires a user ID and password in addition to the Windows NT RAS credentials becomes more secure and user-friendly by using the Pad.inf file rather than Switch.inf. In those cases where the password for the intermediary security device is changing every few seconds or minutes, the use of the variables in Pad.inf file is virtually the only feasible solution because quickly editing the Switch.inf file to enter the password, saving the changes and then starting RAS and making the call may take so much time that the password has changed again in the meantime. However, in Windows NT 3.51 and 4.0, Switch.inf supports two variables: Username and Password which are conveniently obtained through the familiar Windows NT logon dialog box. The Username and Password variables are not available in Windows for Workgroups 3.11, Windows NT 3.1 and 3.5 RAS. For more information, see your RAS manual and online help for X.25 topics. Note: Pad.inf was designed for X.25 connectivity. Although using Pad.inf with non-X.25 networks may work, it has not been tested by, and is not supported by Microsoft. The following list shows all RAS versions to-date - regardless of X.25 support - and whether they ship with a hard copy of the manual or only a softcopy manual or only online help. All RAS versions ship with online help, however, the early RAS versions do not contain nearly as much information in the online help as the later versions. The following RAS version ship with a hard copy of the manual: Remote Access for MS-DOS, versions 1.0, 1.1, 1.1a Remote Access for OS/2, version 1.0, 1.1 Windows NT operating system version 3.1 Windows NT Advanced Server version 3.1 Windows NT Workstation versions 3.5, and 3.51 Windows NT Server versions 3.5, and 3.51 The following RAS versions only have online help: RAS for Windows for Workgroups only has online help documentation. RAS for Windows 95 The following RAS versions ship online help and with a printable, electronic version of the manual on the compact disc. A hard copy of the manual can be purchased for a small fee: Microsoft Windows NT Workstation version 4.0 Microsoft Windows NT Server version 4.0 Note: The RAS server and RAS client online help may contain information on different topics. The Windows NT 3.51 Resource Kit Update 2 manual has a comprehensive RAS Reference that discusses all RAS versions up to Windows NT 3.51 and Windows 95. The RAS Troubleshooter on the Microsoft web at: Microsoft TechNet (Knowledge Base and White Papers) The Microsoft Knowledge Base on the web and on Microsoft TechNet: For More Information For the latest information on Windows NT Server, check out our World Wide Web site at or the Windows NT Server Forum on the Microsoft Network (GO WORD: MSNTS). This script language is not supported for use in the Windows NT Pad.inf file! Use it with RAS for PPP and SLIP dial-up networking, for example, to access the Internet via an Internet access provider. Using the script language in a *.scp script file for X.25 dial-up has not been tested by Microsoft and is therefore not supported, however, it may work. Also Windows 95 has not been tested with X.25 dial-up and this script language, however, it may work, too. 1.0 Overview 2.0 Basic Structure of a Script 3.0 Variables 3.1 System Variables 4.0 String Literals 5.0 Expressions 6.0 Comments 7.0 Keywords 8.0 Commands 9.0 Reserved Words. Scripts may contain variables. Variable names must begin with a letter or an underscore ('_'),: Type Description integer A negative or positive number, such as 7, -12, or 5698. String A series of characters enclosed in double-quotes; for example, "Hello world!" or "Enter password:". Boolean A System Variables System variables are set by scripting commands or are determined by the information your enter when you set up a Dial-Up Networking connection. System variables are read-only, which means they cannot be changed within the script. The system variables are: Name $USERID The user identification for the current connection. This variable is the value of the user name specified in the Dial-Up Networking Connect To dialog box. The password for the current connection. This variable is the value of the user name specified in the Dial-Up Networking Connect To dialog box. $SUCCESS This variable is set by certain commands to indicate whether or not the command succeeded. A script can make decisions based upon the value of this variable. $FAILURE This. Scripting for Dial-Up Networking supports escape sequences and caret translations, as described below. String Literal ^ char Caret translation If char is a value between '@' and '_',. <cr> Carriage return <lf> Linefeed \" Double-quote \^ Single caret \< Single '<' \\ Backslash transmit "^M" transmit "Joe^M" transmit "<cr><lf>" waitfor "<cr><lf>" An expression is a combination of operators and arguments that evaluates to a result. Expressions can be used as values in any command. An expression can combine any variable, or integer, string, or boolean values with any of the unary and binary operators in the following tables. All unary operators take the highest precedence. The precedence of binary operators is indicated by their position in the table. The unary operators are: Operator Type of Operation - Unary minus ! One's complement The binary operators are listed in the following table in their order of precedence. Operators with higher precedence are listed first: Operators Type Restrictions * / Multiplicative Integers + - Additive Integers, Strings (+ only) < > <= >= Relational == != Equality Integers, strings, booleans and Logical AND Booleans or Logical OR count = 3 + 5 * 40 transmit "Hello" + " there" delay 24 / (7 - 1) All text on a line following a semicolon is ignored. ; this is a comment transmit "hello" ; transmit the string "hello" Keywords specify the structure of the script. Unlike commands, they do not perform an action. The keywords are listed below. proc name Indicates the beginning of a procedure. All scripts must have a main procedure (proc main). Script execution starts at the main procedure and terminates at the end of the main procedure. endproc Indicates the end of a procedure. When the script is executed to the endproc statement for the main procedure, Dial-Up Networking will start PPP or SLIP. integer name [ =value ] Declares a variable of type integer. You can use any numerical expression or variable to initialize the variable. string name [ =value ] Declares a variable of type string. You can use any string literal or variable to initialize the variable. boolean name [ =value ] Declares a variable of type boolean. You can use any boolean expression or variable to initialize the variable. All commands are reserved words, which means you cannot declare variables that have the same names as the commands. The commands are listed below: delay nSeconds Pauses for the number of seconds specified by nSeconds before executing the next command in the script. delay 2 ; pauses for 2 seconds delay x * 3 ; pauses for x * 3 seconds getip value Waits for an IP address to be received from the remote computer. If your Internet service provider returns several IP addresses in a string, use the value parameter to specify which IP address the script should use. ; get the second IP address set ipaddr getip 2 ; assign the first received IP address to a variable szAddress = getip goto label Jumps to the location in the script specified by label and continues executing the commands following it. Example: waitfor "Prompt>" until 10 if !$SUCCESS then goto BailOut ; jumps to BailOut and executes commands ; following it endif transmit "bbs^M" goto End BailOut: transmit "^M" halt Stops the script. This command does not remove the terminal dialog window. You must click Continue to establish the connection. You cannot restart the script. if condition then commands endif Executes the series of commands if condition is TRUE. if $USERID == "John" then transmit "Johnny^M" endif label : Specifies the place in the script to jump to. A label must be a unique name and follow the naming conventions of variables. set port databits 5 | 6 | 7 | 8 Changes the number of bits in the bytes that are transmitted and received during the session. The number of bits can be between 5 and 8. If you do not include this command, Dial-Up Networking will use the properties settings specified for the connection. set port databits 7 set port parity none | odd | even | mark | space Changes the parity scheme for the port during the session. If you do not include this command, Dial-Up Networking will use the properties settings specified for the connection. set port parity even set port stopbits 1 | 2 Changes the number of stop bits for the port during the session. This number can be either 1 or 2. If you do not include this command, Dial-Up Networking uses the properties settings specified for the connection. set port stopbits 2 set screen keyboard on | off Enables or disables keyboard input in the scripting terminal window. set screen keyboard on set ipaddr string Specifies the IP address of the workstation for the session. String must be in the form of an IP address. szIPAddress = "11.543.23.13" set ipaddr szIPAddress set ipaddr "11.543.23.13" set ipaddr getip transmit string [ , raw ] Sends the characters specified by string to the remote computer. The remote computer will recognize escape sequences and caret translations, unless you include the raw parameter with the command. The raw parameter is useful when transmitting $USERID and $PASSWORD system variables when the user name or password contains character sequences that, without the raw parameter, would be interpreted as caret or escape sequences. transmit "slip" + "^M" transmit $USERID, raw waitfor string [ , matchcase ] [ then label { ,string [ , matchcase ] then label } ] [ until time ] Waits until your computer receives one or more of the specified strings from the remote computer. The string parameter is case-insensitive, unless you include the matchcase parameter. If a matching string is received and the then label parameter is used, this command will jump to the place in the script file designated by label. The optional until time parameter defines the maximum number of seconds that your computer will wait to receive the string before it execute the next command. Without this parameter, your computer will wait forever. If your computer receives one of the specified strings, the system variable $SUCCESS is set to TRUE. Otherwise, it is set to FALSE if the number of seconds specified by time elapses before the string is received. waitfor "Login:" waitfor "Password?", matchcase waitfor "prompt>" until 10 waitfor "Login:" then DoLogin, "Password:" then DoPassword, "BBS:" then DoBBS, "Other:" then DoOther until 10 while condition do endwhile Executes the series of commands until condition is FALSE. integer count = 0 while count < 4 do transmit "^M" waitfor "Login:" until 10 if $SUCCESS then goto DoLogin endif count = count + 1 endwhile ... The following words are reserved and may not be used as variable names. boolean databits delay do endif endproc endwhile even FALSE getip goto halt if ipaddr keyboard mark matchcase none odd off on parity port proc raw screen set space stopbits string then transmit TRUE until waitfor while The following are two Microsoft Knowledge Base troubleshooting articles that focus on RAS 1.0 and 1.1 installed on computers running Microsoft OS/2 1.3 and LAN Manager 2.1 or later. Part 1 (of 2)--Troubleshooting RAS on an OS/2 1.x Server [lanman] ID: Q98518 CREATED: 06-MAY-1993 MODIFIED: 26-JAN-1995 2.10 2.10a 2.20 MS-DOS PUBLIC | kbnetwork The information in this article applies to: Microsoft LAN Manager versions 2.1, 2.1a, and 2.2 Microsoft Remote Access Service versions 1.0 and 1.1 SUMMARY This is part), verify that the following elements are present--preferably BEFORE you run the RAS Setup program: 1. Serial Ports/Device Drivers One or more serial ports. One or more OS/2 serial port device drivers. Part 1 (the rest of this article) provides information on item 1: serial drivers and boards for ISA, EISA, MCA, Hewlett-Packard (HP) and 3Com computers, Digiboards, AST 4 port boards, and X.25 configurations. Part 2 of this article provides information on items 2-6 below: 2. Modems One or more modems. All supported modems are listed in the RAS 1.1 MODEMS.INF file. Unsupported modems may also work. 3. Serial Cable External modems require properly wired serial cables. 4. LAN Manager LAN Manager 2.1 or later with one protocol/network in addition to AsyBEUI. LAN Manager 2.1 or later must be installed with at least one network (for example, a loopback driver, or NetBEUI plus a MAC driver) in addition to the RAS AsyBEUI. This is because RAS 1.0 and 1.1 function as a gateway and expect another network to be present. 5. User-Level Security RAS MUST have user-level security--it does not support share-level security. 6. PDC, BDC, Member Server or Standalone Status A RAS server can be configured as a primary or backup domain controller, a member-server, or a standalone. Note: For details on topics other than SERIAL PORTS/DEVICE DRIVERS, refer to part 2 of this article, or query on the following words in the Microsoft Knowledge Base: Part 2 (of 2)--Troubleshooting RAS on an OS/2 1.x Server MORE INFORMATION 1. Serial Ports and OS/2 Serial Port Device Drivers Under OS/2 1.3, serial ports cannot be accessed without serial device drivers. Each serial driver/COM-port combination has its own hardware requirements and limitations, so they are discussed separately below. At least one serial port must be available and properly configured, but remember that COM ports have specific and sometimes unique OS/2 serial device driver requirements. Third party serial boards or proprietary built-in ports usually require their own device drivers. For example: Digiboards require XALL.SYS. You must install an OS/2 serial port device driver through the CONFIG.SYS file. Depending on the serial port hardware, you may also have to install proprietary device drivers. Serial Port/Driver Combinations 1.1 ISA and EISA (but not certain HP machines and 3COM 3Servers) machines: use COM01.SYS. COM01.SYS must be loaded in the CONFIG.SYS file. It supports ONLY serial ports COM1 and COM2 on ISA and EISA machines, NOT COM3 and COM4. SERIAL 1 COM1: I/O Address = 3F8h IRQ = 4 SERIAL 2 COM2: I/O Address = 2F8h IRQ = 3 Note: RAS requires that the serial port in use be configured as above. COM01.SYS does not perform a DosOpen call to the serial port until the serial port is actually used, so COM01.SYS loads during CONFIG.SYS time even if the port is misconfigured or there is an IRQ or I/O conflict between one of the COM ports and another device. For example: if a network card is configured for IRQ3, COM01.SYS loads but a system error "SYS 1620" occurs when a MODE COM2: command is issued. 1.2 Hewlett-Packard EISA machines: use COMHP01.SYS. (Note: Find out your HP model number--this may NOT apply to all models.) COMHP01.SYS supports COM1-COM4 with COM1 and COM2 configured as above, but you need to set up COM3 and COM4 with the following I/O addresses and IRQ settings: SERIAL 3 COM3 I/O Address = 3E8h, IRQ = 10 SERIAL 4 COM4 I/O Address = 2E8h, IRQ = 11 These are discussed (as is COMHP01.SYS) in the README.TXT file in C:\OS2\SUPPORT. 1.3 MCA machines: use COM02.SYS (COM01.SYS CANNOT be used): COM02.SYS must be used on Micro Channel machines and loaded in the CONFIG.SYS file. COM02.SYS supports serial ports COM1, COM2, and COM3. IRQ3 is shared by COM2 and COM3. COM1: I/O Address = 3F8h IRQ = 4 COM2: I/O Address = 2F8h IRQ = 3 COM3: I/O Address = 2F8h, IRQ = 3 On Micro Channel machines, COM2-8 are shared at IRQ3, and I/O ports 2F8, 3220 (hex), 3228, 4220, 4228, 5220, and 5228. In OS/2 1.3, however, COM02.SYS can support only COM1-COM3. Some add-in serial boards are supported so that you can get ports COM2 and/or COM3 on, for example, an IBM PS/2 Model 80. One supported add-in board is the IBM DUAL ASYNC adapter, which has two 9-pin serial ports built into it. 1.4 Computers with Digiboards (Micro Channel, ISA, or EISA): use XALL.SYS. For older products, use DGX.SYS. The information in the rest of this section was verified in April 1993: The OS/2 XALL.SYS device driver supports the entire line of Digichannel intelligent asynchronous serial communication controllers and must be loaded in the OS/2 Config.sys file. You can adjust its functionality with device line parameters. The OS/2 DGX.SYS device driver supports older non-intelligent serial Digiboards such as the PC4 board, and is also configured with the help of device line parameters. Note: Digiboard does NOT ship the XALL.SYS OS/2 driver with their hardware. You must order it by calling (612) 943-9020 or obtain it from their BBS at (612) 943-0812. (Communication settings: N,8,1; baud 300, 1200, 2400, 9600; V.32, V.42 and V.42bis standards are supported.) Note: On EISA bus machines with EISA Digiboards you must verify that the XALL.SYS parameter: /p:xxxx has a 4-digit I/O address, the first digit of which is the EISA Digiboard card's bus slot number. This first digit is often forgotten, which prevents the driver from loading properly. 1.5 AST 4-port serial board: use COM01A.SYS. This board is NOT supported by Microsoft, but no problems have been reported with it and we provide this information as a convenience. The driver is available on the Microsoft Web site at. Load it just as you would COM01.SYS. Microsoft has not tested this driver with OS/2 1.3 and therefore does NOT guarantee proper performance. 1.6 3COM 3Servers: 3S400 servers: use COM01S.400 3S500 AND 3S600 servers: use COM01S.500 The CONFIG.SYS file loaded by the LAN Manager installation tape has the appropriate 3COM serial port driver already referenced but still REMarked out so that it does NOT allow the serial port to be used. To make it usable, simply remove the REM on that line, save the file, shutdown and reboot your server. Note: On 3COM servers, only COM1 and COM3 are available--the COM2 port is reserved for the built-in Localtalk port. A 3Com RAS server can use only COM1 and COM3 unless a third party driver and serial port hardware (such as Digiboard) is installed to make COM ports above COM3 available. COM01S.400 and COM01S.500 expect the serial ports to be configured as follows (these are the defaults): COM1: I/O Address = 3F8h, IRQ = 4 COM3: I/O Address = 2F8h, IRQ = 3 Note: The 3Com upgrade toolkits for LAN Manager 2.1 and 2.2 contain a disk for installing RAS on 3servers. For 2.1, insert this disk once you start RemSetup (Remote Setup for 3 Servers--located in the LAN Manager directory on the 3server). Follow the same procedure for the 3COM upgrade toolkit for LAN Manager 2.2, but when you install RAS, insert the disk labeled "Services for Macintosh Remote Installation for 3Server"--the labels for the RAS and the Macintosh services are mixed up. Note: The "LAN Manager Installation and Configuration Guide" for 3Servers incorrectly assumes that a REMarked outline in the STARTUP.CMD file exists for the RAS. Please add the following line to the STARTUP.CMD file just below the group of similar lines: Call c:\lanman\3startms.cmd remoteaccess remoteaccess Note: If you configure RAS for more ports than are physically present (for example, you request a COM4 on a standard 3s500 system where only COM1 and COM3 exist) then the 3Server might hang when RAS is initialized. To cure such a problem, you must edit the STARTUP.CMD file and REMark out the RAS "Call" line. For information on how to do this, refer to the LAN Manager for 3Com servers documentation explaining the CONSOLE mode of 3Servers. 1.7 X.25 drivers and cards: use the vendor's X.25 card and driver. If your server is running RAS over an X.25 network, then in addition to the X.25 card driver RAS needs a COM01.SYS or other driver in order to function and recognize COM ports present on the server. For example: the X.25 card from Eicon, Inc. emulates a maximum of 13 COM ports (COM4- COM16) if there are three regular serial ports on the server already. Note: If more than 13 COM ports are configured, the RAS service terminates upon startup with a TRAP D during NET START REMOTEACCESS. The number of COM ports configurable also depends on other programs running simultaneously and competing for the same resources needed by the Eicon driver software, so if software of this type is running, RAS probably has to be configured for fewer ports before it can start successfully. Start out with 9 or 10 ports configured and then work your way up towards 13. Eicon software version 2 release 2 (v2r2) is out of date as of April 1993. Please upgrade to the latest version: version 3 release 1 (v3r1). For support with the Eicon driver installation or to upgrade to the latest version, please contact Eicon Customer Support Services at (514) 631-2592 (EST). For more information on debugging X.25 problems with RAS, refer to the RAS 1.1 Release Notes in the RAS retail package. For details on RAS requirements other than serial ports and device drivers, please see "Part 2 (of 2)-- Troubleshooting RAS on an OS/2 1.x Server." End of Part 1 of 2. REFERENCES RAS 1.1 Release Notes LAN Manager "Installation and Configuration Guide" "Remote Access Administrator's Guide," chapter 4, System Requirements Additional reference words: sfm 2.10 2.1 2.10a 2.1a 2.20 2.2 KBCategory: kbnetwork KBSubcategory: rmt Part 2 (of 2)--Troubleshooting RAS on an OS/2 1.x Server [lanman] ID: Q98517 CREATED: 06-MAY-1993 MODIFIED: 30-SEP-1994 2.10 2.20 MS-DOS PUBLIC | kbnetwork Summary: This is part at all) verify that the following elements are present--preferably BEFORE running the RAS Setup program: Part 1 (a separate article) provides information on item 1: serial drivers and boards for ISA, EISA, MCA, HP and 3Com computers, Digiboards, AST 4 port boards, and X.25 configurations. Part 2 (the rest of this article) provides information on the next five required items: All supported modems are listed in the RAS 1.1 MODEMS.INF file. Unsupported modems may also work. LAN Manager 2.1 or later with one protocol/network in addition to AsyBEUI. LAN Manager 2.1 or later must be installed with at least one network (for example: a loopback driver, or NetBEUI and a MAC driver) in addition to the RAS AsyBEUI. This is because RAS 1.0 and 1.1 function as gateways and expect another network to be present. Note: For details on Serial Ports/Device Drivers, refer to part 1 of this article, or query on the following words in the Microsoft Knowledge Base: Part 1 (of 2)--Troubleshooting RAS on an OS/2 1.x Server At least one modem must be available. RAS can use internal and external modems. Microsoft supports all modems listed in the RAS 1.1 MODEMS.INF file at a baud rate not to exceed the rate listed with the "MAXBAUD=" parameter in each modem's section. The MODEMS.INF file is the current listing. An unsupported modem MAY work if you: Choose a supported modem in RAS Setup that is emulated by the unsupported modem. Create its own MODEMS.INF file section containing the proper commands. To do this, refer to these sources: RAS 1.1 README.TXT file section "Using Non-Supported Modems" and "Modem Initialization String." "RAS Administrator's Guide" Appendix A, under "Adding a New Modem to MODEMS.INF." The modem manufacturer's modem manual (for the correct codes for the "COMMAND=" line in the MODEMS.INF file). Note: A Modem does not have to be hooked up to a serial port in order for the "NET START REMOTEACCESS" server service to load. Also the RASADMIN utility starts properly (if everything else is set up correctly) and shows the RAS server as running. However, if you access the COM port status screen through the RASADMIN "Server" menu, by selecting "Communication Ports" and then "Port Status," you will see these text strings: During Initialization: Line condition: "Initializing modem" Modem condition: "Unknown" Unknown/No modem: "Line non-operational "Modem not responding" "Hardware failure" "Hardware failure" means that the modem failed for some reason after a "No Errors" condition. Turning the modem off could also cause this condition. RAS recognized the modem's response: "Waiting for Call" "No Errors" External modems require properly wired serial cables. For wiring diagrams, see the RAS 1.0 manual, Appendix A, pages 80, 81, or the RAS 1.1 manual, Appendix A, pages 92 and 93. Note: Serial mouse adapter cables are usually NOT wired correctly for modem communication purposes and should not be used. 4. LAN MANAGER LAN Manager 2.1 (or later) with one protocol/network in addition to AsyBEUI. If the loopback driver is not used, RAS requires that you install a network adapter card that uses a certified Network Driver Interface Specification (NDIS) driver in addition to NetBEUI (or other protocol). User-level security is essential, because RAS relies on the User Accounts Subsystem database for keeping track of user names, passwords, and RAS permissions. Even so, users who are logged on to a RAS server can access LAN resources that have share-level security. For more information see the "RAS Administrator's Guide," chapter 2, "User-level Security." 6. RAS Server Configured as PDC, BDC, Member Server, or Standalone: If the RAS server is supposed to be separate from other domains or simply a non-networked machine, the easiest choice is primary domain controller." (See LAN Manager 2.1 "Administrator's Guide," Chapter 4, page 62: "Changing a Server's Role"). Note: "Error 67" when starting RASADMIN on a standalone server. Even if the server is configured as "standalone," RASADMIN first tries to verify your administrator privilege by finding a primary domain controller with the domain name specified in the "domain = " line of the LANMAN.INI [workstation] section. Most standalone configurations have no valid domain with that name and RAS issues the message: Error 67: This network name cannot be found. If LANMAN.INI specifies a valid domain name where you also have an administrator account with the same password as in your standalone RAS server's user accounts database, RASADMIN starts, but it focuses on the other machine instead of the local standalone server. However, depending on the circumstances, RASADMIN issues other errors such as: Error 2320: The computer isn't active on this domain Error 5: Insufficient privilege If you receive these errors, choose OK, then type in the dialog box the standalone RAS server's computer name as it appears in the LANMAN.INI [workstation] section (for example: computername=rasserver) preceded by two backslashes: \\rasserver (then press ENTER) The RAS server service should now be properly installed. Note: In LAN Manager 2.1a and later, the hardcoded domain name "standalone" allows users to log on faster if validation by a domain controller is bypassed. RASADMIN of RAS version 1.x is not aware of this feature and still responds with the errors shown above even if "domain = standalone" is specified in LANMAN.INI. LAN Manager 2.1 "Administrator's Guide," "Changing a Server's Role," page 62 Remote Access "Administrator's Guide," Appendix A, "Adding a New Modem to MODEMS.INF"; chapter 2, "User-level Security" RAS 1.0 manual, Appendix A, pages 80, 81 RAS 1.1 manual, Appendix A, pages 92, 93 KBCategory: kbnetwork KBSubcategory: Additional reference words: 2.10 2.10a 2 third-party products discussed here are manufactured by vendors independent of Microsoft; we make no warranty, implied or otherwise, regarding these products' performance or reliability. The third-party contact information included in this article is provided to help you find the technical support you need. This contact information is subject to change without notice. Microsoft in no way guarantees the accuracy of this third-party contact information. THESE MATERIALS ARE PROVIDED "AS-IS," FOR INFORMATIONAL PURPOSES ONLY. NEITHER MICROSOFT NOR ITS SUPPLIERS MAKE LOSS OF.
http://technet.microsoft.com/en-us/library/cc751443.aspx
crawl-002
refinedweb
21,776
67.86
Help with simple codeThanks everyone, and sorry about the code tags (or lack thereof). I was able to figure out the prob... Help with simple code#include <iostream> #include <string> #include <cmath> using namespace std; void displayTotals(int... Help with simple code#include <iostream> #include <cmath> #include <string> using namespace std; int main() { string we... Help with simple codeI'm a bit confused by what values are to be added to the "desired quantities" array. - An array fo... Help with simple codeThanks! I was just having problems using the program idea and incorporating everything that she aske... This user does not accept Private Messages
http://www.cplusplus.com/user/travyes1/
CC-MAIN-2013-48
refinedweb
104
68.67
Tell us what you think of the site. Is there a way for me to get a selection and keep the selection order that the user made? When i query the selection list, it automatically sorts the list by the scene order. So if character A is merged into the scene after character B, even if i select A first and then B, it returns B, then A. currently i am using selectedModels = FBModelList() We dont keep the selection order in the FBModelList. I dont know if this can help, but python list can be sorted… Ie: selectedModels.sort(lambda x,y: cmp(x.Name,y.Name)) CHARLES PAULIN | SQA AUTOMATION ANALYST AUTODESK Media & Entertainment Thanks for the reply, I dont think that would do anything in this case, as i am looking for the users specific order of operations, not the alphabetized version. I just dont have any way of telling which item they selected first and which they selected second. Did anyone ever find a solution to this problem. I’m working on a constraint script but to find which object is the child and parent requires a selection order. I could just have the user load each object into the GUI into a text field but that’s about the same as doing it the regular way in Motion Builder. Not much of a solution, but more of a hack: def getSelection(): lModelList = FBModelList() FBGetSelectedModels( lModelList ) selectedModels = [] selectedNames = [] for l in lModelList: selectedModels.append(l) selectedNames.append(l.Name) return selectedModels, selectedNames def getOrderedSelection(): lUndo = FBUndoManager() sel, selNames = getSelection() selectionSize = len(sel) orderedSeletion = [] if len(sel): for i in range(selectionSize): lUndo.Undo() newSel, newNames = getSelection() for s in range(selectionSize): if selNames[s] not in newNames: orderedSeletion.append( sel.pop(s) ) selNames.pop(s) break for i in range(selectionSize): lUndo.Redo() orderedSeletion.reverse() return orderedSeletion sel = getOrderedSelection() I’ve had this in my snippets for a while. I copied the ol’ undo trick for ordered component selection from Maya that Anders Egleus wrote. It requires that you pick one node at a time though. Maybe useful? Stev That is a very nice hack. I was thinking and wanted to look at scene events, but this is for sure better idea! Thanks for sharing! Wow, way to get creative! Too bad we need to go through this method.
http://area.autodesk.com/forum/autodesk-motionbuilder/python/selection-order/page-last/
crawl-003
refinedweb
392
56.15
. Warren Carter THE ROMAN EMPIRE AND THE NEW TESTAMENT AN ESSENTIAL GUIDE Copyright 2006 by Abingdon Press can be addressed to Abingdon Press, P.O. Box 8 0 1 , 201 Eighth A v e n u e South, Nashville, TN 3 7 2 0 2 - 0 8 0 1 , or emailed to permissions@abingdonpress.com. This book is printed on acid-free paper. Library of Congress Cataloging-in-Publication Data Carter, Warren, 1955The Roman Empire and the New Testament: an essential guide / Warren Carter, p. cm. Includes bibliographical references (p. ). ISBN 0-687-34394-1 (binding-pbk.: alk. paper) 1. Bible. N.T.Criticism, interpretation, etc. 2. Bible. N.T.History of contemporary events. 3. Church historyPrimitive and early church, ca. 30-600. 4. RomeSocial life and customs. 5. RomePolitics and government30 B . c - 2 8 4 A.D. 6. RomeReligion. I. Title. BS511.3.C38 2006 225.9'5dc22 2006004350 All scripture quotations unless noted otherwise are taken from the New Revised Standard Version of the Bible, copyright 1989, by the Division of Christian Education of the National Council of the Churches of Christ in the United States of America. Used by permission. All rights reserved. Scripture quotations marked RSV are from the Revised Standard Version of the Bible, copyright 1946,1952, 1971 by the Division of Christian Education of the National Council of the Churches of Christ in the United States of America. Used by permission. All rights reserved. 06 07 08 09 10 11 12 13 14 1510 9 8 7 6 5 4 3 2 1MANUFACTURED IN THE UNITED STATES OF AMERICA Contents Introduction 1. The Roman Imperial World 2. Evaluating Rome's Empire 3. Ruling Faces of the Empire: Encountering Imperial Officials 4. Spaces of Empire: Urban and Rural Areas 5. Temples and "Religious"/Political Personnel 6. Imperial Theology: A Clash of Theological and Societal Claims 7. Economics, Food, and Health 8. Further Dynamics of Resistance Postscript Bibliography Bibliography of Classical Works Cited ix 1 14 27 44 64 Introduction his b o o k e x p l o r e s w a y s in w h i c h N e w T e s t a m e n t writers interact with and negotiate the Roman imperial world. This book is not about "Roman backgrounds" to the New Testament, because it understands Rome's empire to be the foreground. It is the world in which first-century Christians lived their daily lives. It is the world that the New Testament writings negotiate throughout.. This book clearly rejects the notion that Jesus and the New Testament writings are not political. When Jesus declares, "My kingdom is not from this world" (John 18:36), he does not mean, as many have claimed, that Jesus doesn't care about Rome's empire or is only interested in "spiritual" realities. His claim is about the origins of his reign or empire as being from God, not a statement about what it influences or how far it extends. The New Testament writings are clear that God's good and just purposes ix embrace all of life. They are also clear that negotiation with Rome's world is determined by the fact that Rome crucified Jesus. People got crucified not because they were spiritual, but because they posed a threat to the Roman system. First-century Christians did not negotiate the empire in the context of empire-wide persecution. There was no such empire-wide, empire-initiated persecution against Christians until the third century. They did experiencefor what reasons?some local harassment and opposition. Early Christians and New Testament writers engaged the empire largely "from below" as the powerless and oppressed who had no access to channels of power, no voice, and no hope of changing the imperial system. This book looks at some of the diverse ways in which they negotiated a world in which imperial politics, economics, culture, and religion were bound up together. Several options existed for organizing the book. I could have written chapters on each of a number of New Testament writings. I could have organized it by particular strategies. Instead, I have chosen to organize it around important imperial realities that New Testament writings negotiate. This organization highlights significant aspects of the empire that the New Testament writings negotiate. It also provides the opportunity to observe different ways of negotiation evident in various New Testament writings. Chapter 1 describes the Roman imperial system. Chapter 2 discusses some e v a l u a t i o n s of this s y s t e m offered b y N e w Testament writings. Chapter 3 identifies interactions with powerful imperial officials ("faces of empire"). Chapter 4 examines countryside and cities as places that express Roman power and as places in which Christians negotiate that power. Chapter 5 asks similar questions about temples. Chapter 6 considers ways in which claims about God and Jesus engage and contest Roman imperial theology. Chapter 7 looks at ways Christians negotiated basic daily matters that express imperial power, namely economics, food, and health. Chapter 8 explores three further forms of resistance to Rome's empireimagined violence, disguised and ambiguous protest, and flattery. This discussion is illustrative and not exhaustive. Regrettably, limits on the book's length, appropriate to the Essential Guides series, prevent printing all the New Testament x passages that I discuss. It would be helpful for readers of this book to refer to those New Testament passages while reading this book. The book can be used as a seminary or college course textbook, or by a church Bible study group or Sunday school class. It provides passages to study, insights about them, and raises questions for consideration. It provides the basis for asking hard questions about how contemporary Christians negotiate our own contexts of imperial power. The book is also a resource book for clergy and scholars interested in this emerging area of contemporary scholarship. A bibliography provides further resources for exploration as well as acknowledges some of the vast debt I owe to numerous, previous insightful studies. Limits on the book's length prevent acknowledging this debt in extensive endnotes. xi CHAPTER 1 he New Testament texts, written in the decades between 50 and 100 in the first century, originate in a world dominated by the Roman Empire. In places, New Testament texts refer openly to this imperial world and its representatives such as emperors (Luke 2:1), provincial governors (Mark 15:25-39), and soldiers (Acts 10). In places, as we shall see, New Testament writers speak critically about this imperial world. In places, they seem to urge cooperation with Rome. "Fear God. Honor the emperor" (1 Pet. 2:17). But in most places they do not seem to us to refer to Rome's world at all. Jesus calls disciples from fishing. Jesus heals the sick. Paul talks about God's righteousness or justice and human faithfulness. None of this appears to us to have anything to do with Rome's empire. Throughout this book two issues will concern us. The first involves recognizing that the New Testament texts assume and engage R o m e ' s world in every chapter. Even when the New Testament texts seem to us to be silent about Rome's empire, it is, nevertheless, ever present. It has not gone away.. 1 And second, we will see that New Testament writers evaluate and engage Rome's empire in different ways. This variety and diversity of engagement will emerge in each chapter of this book. At least two factors hide this Roman imperial world from us as twenty-first-century readers. The first factor concerns the relationship between religion and politics. We often think of religion and politics as separate and distinct. Religion is personal, individual, private. Politics is societal, communal, public. Of course, just how separated religion and politics really are is debatable (think of the "political" slogan, "God bless America," or of those who seek martyrdom in the name of Islam). But in the first-century Roman world, no one pretended religion and politics were separate. Rome claimed its empire was ordained by the gods. Those whom we think of as religious leaders in Jerusalem, such as chief priests and scribes, were actually the political leaders of Judea and allies of Rome (Josephus, Ant. 20.251). We will continually explore this intermixing of politics and religion. The second factor recognizes that as twenty-first-century readers, we often lack knowledge of Rome's imperial world. This lack of knowledge is very understandable because our world differs in significant ways from the imperial world in which the New T e s t a m e n t t e x t s c a m e into b e i n g t w o t h o u s a n d y e a r s a g o . Understanding Rome's world, though, matters for reading the New Testament texts because these texts assume that readers know how the Roman world was structured and what it was like. The texts don't stop to explain it to us. They don't spell it out for us. Instead we are expected to supply the relevant knowledge. We are expected, for example, to know that when Jesus calls Galilean fishermen to follow him (Mark 1:16-20), fishing and fishermen were deeply e m b e d d e d in the R o m a n imperial system. The emperor was considered to be sovereign over the sea and landa sovereignty expressed in fishing contracts and taxes on the catch. Jesus' call to James, John, Andrew, and Simon Peter redefines their relationship to and involvement in Rome's world. It is reasonable to expect first-century folk to supply the information that the texts assume, since these folk shared the same world as the authors. But it is difficult for us who read them some two millennia later and in a vastly different world. Without under2 standing the Roman imperial world, we will find it hard to understand the New Testament texts. As a first step toward gaining some of this assumed knowledge, I will sketch the structure of the Roman Empire. In the next chapter, I will describe some of the ways that the New Testament texts evaluate Rome's empire. In subsequent chapters I will elaborate specific aspects of Rome's world and ways in which the New Testament writers negotiate it. catch, crop, or herd. To not pay taxes was regarded as rebellion because it refused recognition of Rome's sovereignty over land, sea, labor, and production. R o m e ' s military retaliation was inevitable and ruthless. The Roman Empire was also a legionary empire. In addition to controlling resources, the elite ruled this agrarian empire by coercion. The dominant means of coercion was the much vaunted Roman army. In addition, the elite controlled various forms of communication or "media," such as the designs of coins, the building of monuments, and construction of various buildings. These means communicated elite Roman values and shaped perceptions. Networks of patronage, and alliances between Rome and elites in the provinces, also extended control, maintained the status quo, and enforced the elite's interests. It is this hierarchy and control that Jesus describes negatively, "You know that the rulers of the Gentiles lord it over them, and their great ones are tyrants over them" (Matt. 20:25). Military ForceRome's empire was a legionary empire. Emperors needed loyal legions, the army's basic organizational unit, to exercise sovereignty, enforce submission, and to intimidate those who contem4 plated revolt. Several emperors such as Vespasian in 69 gained power by securing support from key legions. In the first century, there were approximately 25 legions of about 6,000 troops. Legions included large numbers of provincial recruits. Along with actual battles, the use of "coercive diplomacy" (the presence of the legions throughout the empire and the threat of military action) ensured submission and cooperation. Legions also spread Roman presence by building roads and bridges, and improved productivity by increasing available land through clearing forests and draining swamps. Armies needed food, housing, and supplies of clothing and equipment for war. One source of such supplies was taxes and special levies, for example, on grain or corn from the area in which the legion was based. Elite Roman power was secured through the military at the expense of nonelites. Elite AlliancesEmperors ruled in relationship with the elite, both in Rome and in the provinces' leading cities. Rome made alliances with client kings, like King Herod, who ruled with Rome's permission and promoted Rome's interests. The elite, with wealth from land and trade, provided the personnel that filled various civic and military positions throughout the empire, such as provincial governors, magistrates and officials, and members of local city councils. These positions maintained the empire's order and hierarchical structure that benefited the elite so much. Relationships between the emperor and the elite were complex. Since the rewards of power were great, these relationships usually combined deference to the emperor, interdependence, competition for immense wealth and power, tension, and mutual suspicion. In Rome, power was concentrated in the Senate, which comprised some six hundred very wealthy members. It had responsibility for legislation and oversaw its members' rule exercised through various civic and military positions. The Senate included both Romans and elite provincials appointed by the emperor. Senators were the foremost elite level, but the elite also comprised two further levels based on somewhat lower amounts of nevertheless substantial wealth, the equestrians and the 5 decurions. Members of these orders also filled civic and military positions throughout the leading cities of the empire. Appointees carried out their offices with continual reference to the emperor in Rome. Pliny, the governor of Bithynia-Pontus on the north coast of Asia Minor in 109-111 CE, writes some 116 letters to the emperor Trajan seeking the emperor's advice on various administrative matters: securing prisoners; building bathhouses; restoring temples; setting up a fire brigade; determining memberships of local senates; making legal decisions; constructing canals, aqueducts, and theaters; granting Roman citizenship, and asking what to do about Christians who had been reported to him. Pliny's letters show his deference and orientation to doing the emperor's will. The emperor's responses make his will present in the province. To secure appointments to such prestigious and enriching offices, members of the elite needed the e m p e r o r ' s favor or patronage. They competed for favor with displays of wealth, civic commitment, and influence. These displays might involve military leadership, funding a festival or entertainment, building a fountain or bathhouse or some other civic building, supplying a food handout, or sponsoring the gatherings of a trade or religious group. These acts of patronage publicly displayed an elite person's wealth and influence as well as loyalty to the emperor and active support for the hierarchical status quo. Acts of patronage also increased social prestige by creating lower-ranked clients who were dependent on elite patrons. The emperor rewarded such displays of civic good deeds (called euergetism) with further opportunities to exercise power and gain wealth by appointments to civic or military offices. Emperors who did not take partnership with Roman and provincial elites seriously and were unwilling to share with them the enormous benefits of power and wealth, usually met a grisly end. Amid various power struggles, several emperors were murdered, including Caligula (37-41), Claudius (41-54), Galba ( 6 8 69), Vitellius (69), and Domitian (81-96). Others such as Nero (54-68) and Otho (69) committed suicide. Civil war in 68-69 saw four emperors (Galba, Otho, Vitellius, Vespasian), backed by various legions, claim supreme power for short periods of time. The victor Vespasian (69-79) provided some stability with two sons 6 who succeeded him, Titus (79-81) and Domitian (81-96). The New Testament Gospels were written during these decades. Mark was probably written around 70, with Matthew, Luke, and John being written in the 80s or 90s. Divine SanctionIn addition to ownership of resources, military force, and working relationships with the elite, emperors secured their power by claiming the favor of the gods. Their imperial theology proclaimed that Rome was chosen by the gods, notably Jupiter, to rule an "empire without end" (Virgil, Aeneid 1.278-79). Rome was chosen to manifest the gods' rule, presence, and favor throughout the world. Religious observances at civic occasions were an integral part of Rome's civic, economic, and political life. Individual emperors needed to demonstrate that they were recipients of divine favor. Various accounts narrate amazing signs, dreams, and experiences that were understood to show the gods' election of particular emperors. For example, there was a struggle for succession after Nero's suicide in 68. In the ensuing civil war, three figures (Galba, Otho, Vitellius) claimed power for short periods of time before Vespasian emerged as the victor. In sustaining Vespasian's rule, Suetonius describes a dream in which Nero sees Jupiter's chariot travel to Vespasian's house (Vespasian 5.6). The dream presents Vespasian as Nero's divinely legitimated successor. In a similar vein, Tacitus describes the gods deserting Emperor Vitellius to join Vespasian, thereby signifying their election of Vespasian (Histories 1.86). The gods' continuing sanction for emperors was both recognized and sought in what is known as the imperial cult, which was celebrated throughout the empire. The "imperial cult" refers to a vast array of temples, images, rituals, personnel, and theological claims that honored the emperor. Temples dedicated to specific emperors and images of emperors located in other temples were focal points for offering thanksgiving and prayers to the gods for the safekeeping and blessing of emperors and members of the i m p e r i a l h o u s e h o l d . I n c e n s e , s a c r i f i c e s , and a n n u a l v o w s expressed and renewed civic loyalty. The related street processions and feasting, often funded by elites, expressed honor, gratitude, 7 and commemoration of significant events such as an emperor's birthday, accession to power, or military victories. Acts of worship were also incorporated into the gatherings of groups such as artisan or religious groups. Elites played a prominent role in these activities, sponsoring celebrations, maintaining buildings, and supplying leadership for civic and group celebrations. These diverse celebrations presented the empire presided over by the emperor as divinely ordained. They displayed and reinforced the elite's control. They invited and expressed, encouraged and ensured the nonelite's submission. Participation in the imperial cult was not compulsory. Its celebration was neither uniform across the empire nor consistent throughout the first century. Whereas in many cities sacrifices and incense were offered to the emperor's image, in the Jerusalem Temple, for example, daily sacrifices and prayers were offered for the emperor but not to his image. Although participation was not required, it was actively encouraged, often by local elites who funded such activities and buildings and who served as priests or leaders of imperial celebrations. Elite men and women served as priests for the imperial cult (and for numerous other religious groups also) because they could fund the celebrations and gain societal prestige and personal power from it. Such priestly activity, eligible for both men and women, was not a lifetime vocation requiring seminary training and/or vows of celibacy. Rather, good birth, wealth, social standing, and a desire to enhance one's civic reputation were needed. Elite ValuesWith the emperor, members of the elites created, maintained, and exercised power, wealth, and prestige through crucial roles: warrior, tax collector, administrator, patron, judge, priest. These roles exemplify key elite values. Domination and power are foremost, pervading the societal structure. These values were celebrated, for example, in the elaborate "Triumph" that took place in Rome when a victorious general entered the city, displaying booty and captives taken in battle, parading the captured enemy leader, executing 8 him, and offering thanks to Jupiter for Rome's victory. The Triumph, such as that celebrating R o m e ' s destruction of Jerusalem in 70 CE, paraded Rome's military might, conquering power, hierarchical social order, legionary economy, and divine blessing. Elites valued civic display through civic and military offices, p a t r o n a g e , and e u e r g e t i s m ( " g o o d civic a c t i o n s " ) that enhanced their honor, wealth, and power. Their civic leadership enacted a proprietary view of the state. Contributions to society were not exercised for the maximum common good but for personal privilege and enrichment and, in turn, for the good of their heirs. These acts maintained, not transformed, political, economic, and societal inequality and privilege. Elites exhibited contempt for productive and manual labor. Elites did not perform manual labor but they depended on and benefited from the work of others such as peasant farmers and artisans. Slaves were an integral part of the Roman system. They were a relatively cheap and coerced source of labor whose productivity enriched the elite. Slaves provided physical strength as well as highly valued skills in education, business, and medicine. They performed all sorts of roles: hard physical labor of working the land, domestic service, meeting the sexual needs of their owners, educating elite children, and being business and financial managers of a master's estates and commercial affairs. The imposition and collection of taxes on productive activity (farming, fishing, mining, and so forth) also expressed this contempt for labor, while ensuring the elite a constant source of income without requiring their labor. This value clearly distanced the elite from the rest. A fourth value concerned conspicuous consumption. Elites displayed their wealth in housing, clothing, jewelry, food, and ownership of land and slaves. They also displayed it in various civic duties: funding feasts, games, and food handouts; presiding at civic religious observances; building civic facilities; erecting statues; and benefiting clients. They could afford such displays because taxes and rents provided a constant (coerced) source of wealth. The overwhelming power to extract wealth from the nonelite by taxes made the need to accumulate or invest wealth largely obsolete. 9 A fifth value concerns a sense of superiority. This value was sustained by and expressed through the ability to subject, coerce, exploit, and extract wealth. Rome was divinely destined to rule. Others such as "Jews and Syrians were born for servitude," according to Cicero (De provinciis consularibus 10). According to Josephus, the future emperor Titus urges his troops to victory over Judeans by claiming that they are "inferior" and have "learned to be slaves" (Josephus, JW 6.37-42). Rome was superior to provincials, the wealthy and powerful elite to the nonelite, males to females. The NoneliteI have concentrated so far on the ruling elite, especially the hierarchical societal structure that they maintained and from which they benefited immensely. This is the world that most of the population, the nonelite, negotiated every day. Since the nonelite comprised about 97 percent, con10 taminated. I will consider urban life further in chapter 4, and food shortages and disease in chapter 7. _____ defiant provoke harsh retaliation, protests among dominated groups are hidden or "offstage." Apparently compliant behavior can be ambiguous. It can mask and conceal nonviolent acts of protest. Often protest is disguised, calculated, self-protective. It may comprise telling stories that offer an alternative or counterideology to negate the elite's dominant ideology and to assert the dignity or equality of nonelites. It may involve fantasies of violent revenge and judgment on elites. It may imagine a reversal of roles in favor of nonelites. It may employ coded talk with secret messages of freedom ("the reign of God") or "double-talk" that seems to submit to elites ("Pay to Caesar the things that are Caesar's") but contains, for those with ears to hear, a subversive message ("and to God the things that are God's"). It may reframe an elite action intended to humiliate (such as paying taxes) by attributing to it a different significance that dignifies the dominated. It may create communities that affirm practices and social interactions that differ from domination patterns. A scholar, James Scott, sums up this sort of protest with a proverb from Ethiopia: the general (or emperor or landowner or governor or master) passes by, the peasant bows, and passes gas. Bowing seems to express appropriate deference. But apparent compliance is qualified by the offensive and dishonoring act of passing gas. This nonviolent act is hidden, though, disguised, anonymous, shielding the identity of the one who dissents. The action is not going to change the system, but it does express dissent and anger. It affirms the peasant's dignity as one who refuses to be completely subjected. It attests a much larger web of protest against and dissent from the elite's societal order and version of reality. This web of protest has been called a "hidden transcript." It offers a vision of human dignity and interaction that is an alternative to the elite's "public transcript" or official version of how society is to be run. The New Testament writings can, in part, be thought of as "hidden transcripts." They are not public writings targeted to the elite or addressed to any person who wants to read them. They are written from and for communities of followers of Jesus crucified by the empire. The New Testament writings assist followers of Jesus in negotiating Rome's world. Because of their commitment to J e s u s ' teaching and actions, they frequently dissent from 12 Rome's way of organizing society. Often, though not always, they seek to shape alternative ways of being human and participating in human community that reflect God's purposes. Often, though not always, they offer practices and ways of living that often differ significantly from the domination and submission patterns of Rome's world. Often, though not always, they provide different ways of understanding the world, of speaking about it, of living and relatingall the while rejecting options of total escape from or total compromise with Rome's empire. This diverse and varied negotiation is the subject of this book. 13 CHAPTER 2 n chapter 1,1 described the hierarchical structure of the Roman Empire, which benefited the ruling elite at the expense of the nonelite. I also identified a number of ways in which this elite secured and enhanced its power, status, and wealth: 1. Political office. Elites controlled all political office, including civic and military positions, for their own benefit, not for the common good. 2. Land ownership. Elites controlled large areas of land. Land was basic for wealth. Elites also participated in trade by sea and land. 3. Cheap labor, whether slaves, day laborers, artisans, or peasant farmers, produced goods largely for elite consumption. 4. Taxes, tributes, and rents, usually paid in goods (and not by check or credit card), literally transferred wealth from the nonelite to the elite. 5. Military power gained territory, extended domination, and enforced compliance. Its rumored efficiency or brutality deterred revolts. 6. Patron-client relations. A complex system of elite patrons and dependent clients from the emperor down displayed wealth and power to enhance elite status, build dependency, and secure 14 loyalty, dependence, and submission from nonelites. Competition for power and status among elites required displays of wealth and influence in various acts of (self-benefiting) civic leadership. 7. Imperial theology. Rome claimed election by the gods to rule an "empire without end" and to manifest the gods' will and blessings. Offerings to images of imperial figures and street festivals celebrated Rome's power and sanctioned its hierarchical societal order. 8. Rhetoric. While Rome's army coerced compliance, speeches at civic occasions and various forms of writings (history, philosophy, and so forth) persuaded nonelites to be compliant and cooperative. 9. Legal system. Rome's legal system exercised bias toward the elite and against the rest. It protected elite wealth and status, and employed punishments appropriate not to the crime but to the social status of the accused. 10. Cities. Urban centers displayed Roman elite power, wealth, and status, and extended control over surrounding territory. In this chapter we will look at how the New Testament writers evaluate this Roman imperial world. There are numerous options open to them. They could be so heavenly minded that they take no interest in it. They could be so happy to submit to it that they simply assume its existence without asking any questions about it. They could understand it as ordained by God and passively comply. They could be so opposed to it, so persuaded that it is demonic and beyond all hope that they look only to God's future. How do New Testament writers think about this world? What perspectives do they use to evaluate it? O n e i m p o r t a n t s o u r c e of p e r s p e c t i v e s a v a i l a b l e to N e w Testament writers is the Hebrew Bible. New Testament writers know traditions about God's life-giving creation of a good world. They are familiar with Israel's long history of struggles with imperial powers, whether Egyptian, Assyrian, Babylonian, Persian, or Hellenistic. They are familiar with the central events of exodus from Egypt and exile to and return from Babylon. They also know about God's commitment to justice for all, expressed, for example, through a righteous king (Ps. 72). They know traditions about Jesus' ministry in which he was crucified by Rome. These traditions often frame their evaluation of Rome's world. The writers 15 are not so "spiritually" focused or "heavenly minded" or "religious" as to claim that God is not interested in daily life in Rome's world. Rather, they evaluate Rome's world in relation to God's life-giving purposes. They place Rome's world in theological perspective and offer various theological verdicts on it. We will look at five quite different evaluations that form a spectrum of ways of thinking about Rome's world. Subsequently, we will identify particular strategies or behaviors for daily living that these evaluations suggest. 16 Both accounts identify the devil as controlling the world's empires (of which Rome in the first century CE is foremost). Both present the devil as having the power to allocate the world's empires as the devil wishes. Rome, therefore, is in the devil's control. The devil is the power behind the Roman throne. By contrast, Jesus manifests God's kingdom or empire (Mark 1:15; Matt. 4:17; Luke 4:43). Referring to God's "kingdom" or "empire" or "reign," he uses in these verses the same word that the devil uses for the w o r l d ' s " k i n g d o m s " or " e m p i r e s " in Matthew 4:8-9 and Luke 4:6-7. The use of the same word highlights the contrast and opposition between the two entities. Jesus asserts God's claim of sovereignty over the world under Satan's control and manifested in Rome's rule. In Jesus' exorcisms, for example, Jesus literally "throws out" the evil spirits, exhibiting God's reign to be victorious over Satan's reign (Matt. 12:28). Mark shows Rome's empire to be of the devil in the story of the man possessed by a demon (Mark 5:1-20). The demon's name is "Legion," the central unit of Rome's military. The possessed man's life is marked by death (5:3); by a lack of control (5:3); unshackled power (5:3-4); and violent destruction (5:5), hardly a flattering picture of Rome's power. Jesus reveals the power of the demon in addressing it (5:8) and having it identify itself as "Legion" (5:9). The demon begs Jesus not to send it out of the country that they occupy (5:10). Instead, Jesus casts it into a herd of pigs that destroys itself in the sea (5:13). Significantly, the mascot of Rome's tenth Fretensis legion that destroyed Jerusalem in 70 (about the time Mark was written) was the pig. The scene shows J e s u s ' power over Rome and the latter's destruction. Mark's exorcism scene presents the might of Rome as an expression of demonic power, as wrecking havoc and destruction, but as subject to God's purposes expressed in Jesus. Its removal means people can again be "clothed and in [their] right mind" (5:15). It can be noted that studies of oppressive and imperial contexts commonly show significant increases in psychosomatic illnesses and behavior attributable to demonic possession. The book of Revelation also presents Rome's empire as expressing the d e v i l ' s p o w e r and o p p o s i n g G o d ' s good p u r p o s e s . Revelation 12 reveals that the devil, "a great red dragon" (12:3) and "deceiver of the whole world" (12:9), actively opposes the 17 church (12:17). In chapter 13, this dragon gives his "power and his throne and great authority" to a beast from the sea (13:12). This is the Roman Empire to whom the devil gives dominion over earth's inhabitants who worship it (13:1-10). The beast opposes God and God's people (13:6-7). Moreover, a second beast emerges who acts on behalf of the first beast. It requires worship of the first beast. It also exerts control over economic interaction among the "small and great, both rich and poor, both free and slave" by marking them on the hand or forehead (13:16). The mark signifies ownership, reminiscent of the marking of slaves (contrast the marking of God's people, 7:2-4). It indicates that all are slaves of the beasts and dragon. That is, the chapter reveals Rome's political-economicreligious system to represent the devil's rule, to be antithetical to God's purposes, and to be an enslaving system. that the empire had brought "peace and security," Paul reveals its falseness by speaking immediately of God's judgment on the empire. He identifies it with darkness and night and goes on to speak of God's wrath (1 Thess. 5:1-10). John's Gospel similarly speaks in the singular of "the ruler of this age." This ruler "will be driven out" (12:31), "is coming" (14:30), and "has been condemned" (16:11). Conventionally this ruler has been understood to be the devil. But several factors suggest it also refers to the whole of the Jerusalem and Roman ruling elite allied as agents of the devil. (1) The same word, "ruler," refers to the Jerusalem leaders (3:1; 7:36, 48; 12:42); (2) the Gospel identifies these leaders as children of the devil (8:44-47); (3) the reference to the ruler who is "coming" (14:30) seems to indicate in the narrative Jesus' impending meeting with both the Jerusalem leaders and Pilate, the Roman governor (18:1-19:25); and (4) the Gospel recognizes that the Jerusalem leaders and Pilate are allies in representing and upholding Rome's order. The Jerusalem leaders claim, "We have no king but the emperor," in solidarity with the Roman governor Pilate who crucifies Jesus for threatening Rome's order flohn 19:15; cf. 11:48-53). Jesus' statement in 16:11, then, that the ruler of this age has been condemned, articulates God's judgment on the devil and on the Roman order that manifests the devil's power and purposes. The Gospels of Mark, Matthew, and Luke also use the "two age" scheme to announce judgment on Rome's world. They contain eschatological sections (Mark 13; Matt. 24-25; Luke 21), which describe signs that precede God's judgment on the present world and the future coming or return of Jesus to effect that judgment. Matthew ironically calls Jesus' coming the parousia (Matt. 24:3, 27, 37, 39), a term that commonly denotes the arrival of an emperor or military commander in a town. But instead of referring to an assertion of Rome's sovereignty, Matthew uses the term to assert God's rule in Jesus' coming. Matthew 24:27-31 presents this coming as a battle. Using "eagles," and not the mistaken translation "vultures," verse 28 ("Wherever the corpse is, there the eagles will gather") depicts the destroyed Roman army, represented by the symbol of the eagle that was carried into battle and was protected at all cost, as a corpse. God's judgment enacted by Jesus condemns and ends Rome's empire. 19 cal parties, to sign petitions, or to lead mass reform movements. And the elite was certainly not going to surrender its power and wealth voluntarily. Hence New Testament texts often urge readers to form alternative communities with practices that provide lifegiving alternatives to the empire's ways. Paul, for example, urges the churches in Rome, in a city full of displays of the elite's power and privileges: "Do not be conformed to this world, but be transformed by the renewing of your minds, so that you may discern what is the will of God" (Rom. 12:2). Renewed minds involve understandings of G o d ' s verdict on Rome's world, as well as of God's purposes for a different world. Paul instructs them to form communities of mutual support, to love one another, to not be haughty with one another, and to feed rather than avenge their enemies (chap. 12). Clearly these practices differ vastly from the indebtedness and dependency of patronclient relations, from the empire's hierarchy and domination, and the execution of military retaliation. They create a very different societal experience and very different ways of being human. Likewise, Paul affirms that the communities of believers in Rome have significantly different roles for women. In contrast to the patriarchal structure of the empire, which presented the emperor as "Father of the Fatherland" and head of a large household, but consistent with evidence that some women took prominent civic roles, Paul recognizes significant leadership roles for women. He describes women such as Phoebe (Rom. 16:1-2), Prisca (16:3), Mary (16:6), Junia (16:7), Tryphaena and Tryphosa (16:12), Rufus's mother (16:13), and other women with language that also describes his own ministry of preaching, teaching, pastoral care, and church planting. That is, his language recognizes the legitimate and significant ministries of these women. Paul also gathers a collection among his Gentile communities to alleviate the suffering of believers in Jerusalem (1 Cor. 16:1-4; 2 Cor. 8 - 9 ; Rom. 15:25-28). Four contrasts with Rome's taxing practices are immediately evident in Paul's collection: (1) the flow of resources from Macedonia and Achaia to Judea counters the flow of resources from the provinces to Rome; (2) the collection is a willing contribution rather than coerced taxation; (3) it is not given by nonelites to support extravagant lifestyles; and (4) the intent is to relieve suffering rather than cause it. 21 Matthew's Jesus similarly urges believers to form alternative communities with alternative practices. While "the rulers of the Gentiles lord it over" others, Jesus declares, "it will not be so among you." Instead of domination and tyranny, followers of Jesus are to live as slaves who seek the good of the other (20:24-28). John's Gospel similarly urges communal practices of mutual service (John 13:14, 34-35). James counters the culturally imitative practice of favoring the wealthy at the expense of the poor by encouraging the opposite practice. God's favor elevates the poor (James 2:1-7). Numerous texts urge followers of Jesus to demonstrate practical mercy in alleviating the terrible misery of Rome's world. First John 3:16-17 condemns those who claim to know God's love, who have some resources ("the world's goods"), but refuse help for someone in need. Paul (Rom. 12:19-21), Matthew (6:1-4; 25:31-46), and Acts (11:27-30) urge similar acts of practical mercy. None of these practices can bring down the massive inequities of Rome's system, but they provide an alternative to elite conspicuous consumption and help its victims survive more adequately. First Peter 2:17 may go further. Christians are to accept the authority of emperors and governors and are to "honor the emperor" (2:13-17). This honoring is part of a general strategy of good conduct and social cooperation that will help Christians regain a "good name." Honoring the emperor includes loyalty to the empire in every way (2:12; 3:16) except, as it is commonly interpreted, participating in prayers and sacrifices for the emperor. But this exception may not be so clear. (1) The refusal to participate in sacrifices and prayers would seriously undermine the strategy of social cooperation that the rest of the letter urges as the way for believers to regain a good name; (2) it neglects the letter's emphasis on inner commitment to Christ in one's heart (3:15); (3) and it overlooks the fact that all Christians did not abstain from involvement with idols (1 Cor. 8-10; Acts 15:29; Rev. 2-3). The third-century Christian writer Origen recognizes that some Christians offer sacrifices as a convenient social custom but not as genuine devotion. The possibility exists, then, that 1 Peter is encouraging Christian participation in honoring the emperor (including participating in sacrifices) as a socially convenient activity while recognizing that their real commitment is to Christ. Honoring Christ in their hearts (3:15) renders the external, socially compliant actions of sacrifice harmless. Rather, their negotiation of Rome's world is more complex. Survival, engagement, and accommodation mix with protest, critique, alternative ways of being, and imagined violent judgment. Preachers like Paul move from place to place on roads constructed to move Roman troops, taxes, and trade. They preach in cities that exploit surrounding rural areas, consign people to great misery, and extend Roman control (see chapter 4). Opposition and accommodation coexist. Followers of Jesus know a hybrid existence that results from their participation in two worlds, that of Roman domination and the alternative community of followers of Jesus. At times this mix of opposition and pragmatic survival is a deliberate strategy, a pragmatic way of "getting by" in a context where democratic processes for change are not available and without selling one's soul completely. The mix, of course, runs the risk that the various elements will not be held in tension. Cooperation brings benefits and rewards that make survival easier. Accommodation can take over. But the mix also results from other dynamics that take effect among folk subordinated to oppressive powers. Commonly, dominated peoples do not violently confront their oppressor because they know that the latter usually wins. Rather, the dominated combine various nonviolent forms of protest with acts of accommodation. The latter often disguise acts of dissent that are self-protective, masked, and ambiguous. But such survivalprotest tactics encounter another dynamic. Whereas oppressed peoples resent their oppressors and imagine their destruction, they often come to imitate them. They resent the power that is being exerted over them, yet they recognize that being able to wield power is desirable. They long for what they resist. They resemble what they oppose. Imitation coexists with protest, accommodation, and survival. We will explore these dynamics further throughout this book, but a brief example can be noted here. As we saw in chapter 1, the Roman Empire was a legionary empire that depended on its military prowess and threat to maintain control. It is not surprising that people living in a context of military power and subordinated to its power should absorb this m i l i t a r y ethos and l a n g u a g e w h e t h e r they w a n t to or not. Accordingly, New Testament writings, written by people under Roman occupation, frequently employ military metaphors to describe aspects of Christian living! That is, writers borrow a perva24 sive way of thinking and acting in the dominating culture, a culture they often oppose, to express aspects of their alternative worldview and way of life. The way of the world, however, is so strong and pervasive that they cannot resist its influence even as they protest it and reapply the language to a different form of existence. We noted above Paul's certainty that the empire is under judgment and that followers of Jesus should form alternative communities. Yet Paul describes himself and coworker Epaphroditus as "soldiers" in God's service (Phil. 2:25). He sees his preaching as waging war, not a "worldly war" with "worldly weapons" but with "divine power to destroy strongholds" and to take "every thought captive to obey Christ" (2 Cor. 10:3-6). The writer of 1 and 2 Timothy summons Timothy to be "a good soldier of Christ Jesus" and to be unwavering in his commitment to serve God. "No soldier on service gets entangled in civilian pursuits, since his aim is to satisfy the one who enlisted him" (2 Tim. 2:3-4 RSV). He is to "wage the good warfare" (1 Tim. 1:18 RSV). Paul also pictures Christian existence as a battle. There are warring powers at work within Christians (Rom. 7:23). One of these powers is the flesh, which is "hostile to God; it does not submit to God's law" (Rom. 8:7). In Philippians 4:7, as part of exhorting their anxiety-free focus on God and practice of constant prayer, he assures them that the resulting peace will "garrison" or "keep the enemy (anxiety) out of" their hearts (author's trans.). Paul applies military metaphors extensively to the life of his churches. In 1 Corinthians 9:7, in arguing that church leaders like himself should be paid, he appeals to the fact that soldiers are paid. In 1 Corinthians 14:40 he uses a military image in urging worship that is "orderly." He uses a term that denotes proper battle order. He has argued earlier in the chapter that just as an indistinct trumpet gives uncertain signals for battle, so unintelligible speech (speaking in tongues without interpretation) is not helpful for Christian living (14:8). In 1 Corinthians 15:23 he again employs a military metaphor of troops in line for battle in referring to the "order" or "ranks" of believers who constitute the "army" of the returning Christ. In military style, the return of Jesus is signaled with blowing a trumpet (1 Cor. 15:52). In 1 Corinthians 16:15 he sees the household of Stephanas as having "lined themselves up" or "ordered themselves" for service. And in Galatians 6:15 he exhorts the Galatians to "keep in step" with the Spirit (author's trans.). 25 A soldier's armor frequently provides imagery of Christian living. Paul tells the church in Rome to put on the "armor of light" (Rom. 13:12). In 2 Corinthians 6:7 he tells his hearers to b e equipped with "the weapons of righteousness [or justice]." In Romans 1:16-17 he describes righteousness or justice as "the p o w e r of God for salvation." The i m a g e suggests b e l i e v e r s empowered by God living according to God's purposes to restore and heal the world. The writer of Ephesians develops the armor image at length in describing a battle not against "flesh and blood, but against the principalities, against the powers" (Eph. 6:10-17 RSV). The writer selectively highlights pieces of armor. The belt (6:14) represents integrity. The breastplate of righteousness/justice signifies protection for faithfully enacting G o d ' s p u r p o s e s , whereas the sandals or shoes suggest alertness and solid grounding in the faith (6:15). The shield denotes faith or confidence in God that protects against attacks with "flaming arrows" (16:16), as does the helmet of salvation and the sword or word of God (6:17-18). This defensive pose is reflected in another military metaphor in 1 Timothy 5:14, where the writer is concerned about the conduct of women believers. His reason (and surely it is a man writing!) for having younger widows marry is that domesticating them will not "give the enemy an occasion" or "base of operations" or "beachhead" from which to launch further attacks on believers. These examples indicate the complex interaction among factors of survival, protest, accommodation, and imitation. That which exerts Roman power is imitated in texts that encourage both living with it and opposing it. But the military language is used without comment, suggesting it is deeply ingrained in these writers who live in the midst of Roman power. ConclusionI have noted five ways in which New Testament texts negotiate this world. Some texts view it as being of the devil and under God's judgment. Some offer visions of transformation and shape alternative communities with alternative practices. Others urge submission, prayer, and honoring behavior. Followers of Jesus employ various strategiessurvival, accommodation, protest, dissent, imitationin negotiating Rome's world. 26 CHAPTER 3 n chapter 1,1 described the hierarchical structure of the Roman Empire, which benefited the ruling elite at the expense of the nonelite. In chapter 2 , 1 described five different ways in which New Testament texts evaluated Rome's world: 1. Under the devil's control; 2. Under God's judgment; 3. Needing transformation; 4. Shaping alternative communities and alternative practices; 5. To be submitted to and honored. I sketched some of the strategies that New Testament writers employ in negotiating Rome's world. These diverse evaluations and strategies of Rome's world need further exploration. How did followers of Jesus negotiate the various realities of Rome's empire in their daily lives? Even if one considers the empire to be under the devil's control or subject to God's judgment, one still has to live in it each day. How did followers of Jesus engage the empire's means of control on a daily basis? Some of these questions are unanswerable, or, at best, only partially answerable. Our sources, the New Testament texts, do not address some of these issues directly. And sometimes when they 27 do, they outline what they want Christians to do rather than telling us what Christians were actually doing. We saw this issue at the end of chapter 1 in 1 Peter's instruction to honor the emperor. Is the instruction necessary because people were not honoring the emperor appropriately, or does the instruction reinforce what they were already doing? In this chapter, I will focus on the interactions between followers of Jesus and the first means of control identified above, the obvious faces and representatives of the imperial system (political office). In chapter 1,1 emphasized that Rome's empire constitutes the world in which the New Testament writings come into being and comprise the world in which the early Christians lived their daily lives. But while the empire was pervasive, its presence was made especially visible by its rulers. How do New Testament texts portray interaction with representatives of the empire? How do these rulers relate to God's purposes? How are followers of Jesus to negotiate the empire's ruling authorities, emperors, client kings, governors, and soldiers? Again, we will notice considerable diversity in perspectives. We will begin at the top with the emperor. EmperorsOf the ruling authorities, the emperor in Rome was supreme. The emperor journeyed to various parts of the empire, but his face was better known from coins and statues. The New Testament writers make numerous references to emperors. I will discuss seven examples. First, as we have noticed, 1 Peter 2:13-17 urges submission to honoring the emperor and 1 Timothy 2:1-2 urges prayer for the emperor. Second, Jesus' difficult instruction to "pay back to Caesar the things of Caesar and to God the things of God" (Mark 12:13-17; Matt. 22:17-22; Luke 20:21-26) indicates a much more ambivalent relationship. Those who question Jesus are powerful elite allies of the Jerusalem leadership allied with Rome. Angered by Jesus' attack on the temple, their power base, they want to trap Jesus so as to kill him. Paying taxes expressed submission to Rome's and the elite's sovereignty while nonpayment was regarded as rebellion. Given this power differential, hostile intent, and imperial context, Jesus answers in a way that subordinated people often do (see further, 28 chapter 8, below). He cleverly combines loyalty and deference with his own subversive agenda. He employs ambiguous, coded, and self-protective speech to uphold payment of a coin bearing the emperor's image, while also asserting overriding loyalty to God. He balances apparent compliance with hidden resistance. Jesus jousts with them. He does not have a coin so he asks them for one, forcing them to admit that they do not observe the prohibition against images in the Ten Commandments (Exod. 20:1-6)! Then when they have provided the coin, he irreverently asks a remarkable question about the most powerful person on the planet, "Who is this guy?!" Then he gives a very ambiguous answer: "Pay back to Caesar the things of Caesar and to God the things of God." "The things of God" embrace everything since God is creator; "the earth is the LORD'S and all that is in it" (Ps. 24:1). The "unofficial transcript" says the earth does not belong to Caesar despite Rome's claims of ownership (the "official transcript") that the tax represented. Nothing belongs to Caesar. But instead of saying, "Pay Caesar nothing," he orders payment to Caesar of Caesar's things. In context, that means the coin and the tax. But the verb translated "give" or "render" literally means "give back." Jesus instructs followers to pay the tax as an act of "giving back" to Caesar. Followers "give back" to Caesar a blasphemous coin that, contrary to God's will, bears an image. Paying the tax is literally a way of removing this illicit coin from Judea. As far as Rome is concerned, the act of paying looks like compliance. But Jesus' instruction reframes the act for his followers. "Giving back" to Caesar becomes a disguised, dignity-restoring act of resistance that recognizes God's all-encompassing claim. Third, two emperors figure in Luke 2 - 3 . Jesus' birth occurs when the emperor Augustus, who ruled from 27 B C E - 1 4 CE decrees a census (Luke 2:1-3). Whether such a census occurred at the time Luke claims is debatable. But the reference is crucial for framing the story of Jesus' birth. Interpreters have claimed that the reference to the census in 2:1-3 shows the empire and God cooperating to get Joseph and Mary to Bethlehem for the birth. But it is a matter of contrast and critique, not cooperation. The census is an instrument of imperial rule and domination. Empires count people so that they can tax them to sustain the elite's exploitative lifestyle. Jesus' birth in Bethlehem, "the city of David," the city 29 from which David's family originates and in which he is anointed king (1 Sam. 16; Luke 2:4, 10-11), evokes traditions about Israel's king who is to represent God's justice, especially among and on behalf of the poor and oppressed (see Ps. 72). Jesus' birth as a Davidic king at the time of Rome's census recalls God's purposes that are contrary to Rome's and threaten to transform Rome's world. In Luke 1, Mary had celebrated God's purposes in countering elite power. God brings "down the powerful from their thrones," fills "the hungry with good things," and sends "the rich away e m p t y " (Luke 1:52-53). How God does this politically, socially, and economically transformative work is not stipulated, but the text makes no mention of violence or military intervention. Fourth, Luke 3:1 contextualizes the ministry of John the Baptist in the reign of Tiberius Caesar, who ruled from 14 to 37 CE. In the midst of the power of Rome and its ruling provincial allies Herod, Philip, and Lysanias (Luke 3:l-2a)God intervenes to commission an agent of God's purposes. "The word of God came to John" (3:2b). John ministers in the wilderness and around the Jordan, places associated with God's deliverance of the people from tyranny in Egypt. John brings a message of change (repentance) and a sign of cleansing and new beginning (baptism) in anticipation of "the salvation of G o d " (3:6). Among those who come for baptism are agents of the empire, notably tax collectors and soldiers (3:12-14). Repentance for them does not mean ceasing their occupations and withdrawing from the empire. Rather they are to continue their occupations but conduct them with justice. They are to stop collecting excessive taxes and extorting money. Fifth, the alliance between the emperor and provincial elites, namely the Jerusalem leaders, is demonstrated in John 18 and 19. Unlike the scenes in Mark and Matthew where Pilate manipulates the crowds, John shows him manipulating his allies, the Jerusalem leaders. Throughout the scene, there is an ongoing sparring match in which both parties score points as they negotiate each other's power. But in the end Pilate wins the contest, making them beg to remove Jesus, thereby constantly reminding them that he has the ultimate power. Pilate has joined with his Jerusalem allies to send troops to arrest Jesus, so he clearly understands Jesus to be a threat who needs to be removed (John 18:3). But when the Jerusalem leaders 30 _ _ bring Jesus to Pilate for execution, he pretends not to know what Jesus has done wrong. He ignores the word "criminal" and tells them to deal with him themselves (18:29-31fl). He knows as well as anyone that this is not possible but his apparent dismissal of their concerns goads them into an important statement of dependence on him. "We are not permitted to put anyone to death" (18:31b). This admission of the need for his help is music to Pilate's ears, so he taunts them further. He claims to find "no case" against Jesus (18:38). Again, he is playing games. Jesus has just made treasonous statements to Pilate that he (Jesus) has an empire, and he has not denied being a king (18:33-37). Jesus' statement that his kingdom or empire is "not from this world" (18:36) does not mean that his kingdom has nothing to do with politics or worldly matters. Rather he means that his identity and mission come from God and involve revealing God's faithful purposes or reign for life on planet earth. Pilate knows that any talk of other kingdoms or empires and kings is dangerous. Pilate has Jesus flogged (19:1). This action renders his next statement in 19:4, that he finds no case against Jesus, hardly convincing. He is playing games. His games continue when he again tells them to deal with Jesus themselves (19:6). Their response again expresses their reliance on Pilate's help in acting against Jesus (19:7). In the meantime, Pilate asks Jesus, "Where are you from?" (19:8-12). Understanding Jesus' origin from God is a crucial recognition of his authority, revelation, and identity in John's Gospel. Jesus' nonresponse (19:9) brings a reminder from Pilate that he has the power to release or crucify Jesus. In turn, Jesus informs the noncomprehending Pilate that he has no power over Jesus except that which is given by God (19:11). Again Pilate taunts his allies by threatening to release Jesus (19:12). But this time the Jerusalem leaders call Pilate's bluff. "If you release this man, you are no friend of the emperor. Everyone who claims to be a king sets himself against the emperor" (19:12). He cannot release one who is a kingly pretender and is therefore an enemy of the emperor. To release King Jesus would be to fail the emperor badly. Their words signal the end of the game playing. They have called Pilate to task. His job as governor is to protect the emperor's interests. They challenge him to do what he is supposed to be doing. 31 But he gets his revenge. He asks a question to which he knows the answer but he frames it with a jab ("your King") to elicit another response of dependence from them. "Shall I crucify your King?" (19:15). Their response of begging Pilate to execute Jesus exceeds Pilate's wildest dreams. They shout, "We have no king but the emperor" (19:15b). It is a stunning statement. With these words these elite leaders renounce their covenant loyalty to God as Israel's king and express their opposition to God's purposes revealed in Jesus. They give their total allegiance to the emperor. Pilate the governor has done a remarkable job of securing the emperor's interests with this confession of the Jerusalem elite's loyalty! And simultaneously he gets to execute a threat to Rome's order. In this scene Pilate walks a fine line between working with his allies to remove Jesus, and taunting and subjugating them. He respects their custom not to enter his headquarters at Passover, but he elicits from them statements of their dependence on him and of their loyalty to the emperor. The scene shows John's audience that the empire is not committed to God's purposes. It cannot recognize God's agent and does not receive his revelation of God's purposes. Allegiance to the emperor competes with allegiance to Jesus. The empire is dangerous. Sixth, the New Testament texts, as do numerous Jewish texts, commonly refer to God as "Father." John's Gospel does so some 120 times. This title often denoted the god Zeus or Jupiter, so its use for the God of Israel and of Jesus differentiates God from the patron god of the Roman Empire. Moreover, since the rule of A u g u s t u s (died 14 C E ) , "father" identified the e m p e r o r as Jupiter's agent and the embodiment of Jupiter's rule. He was called pater patriae, "Father of the Fatherland" or "Father of the Country" (e.g., Acts of Augustus 35; Suetonius, Vespasian 12). This title not only combined religion and politics, but it also depicted the empire as a large household over which the emperor, like a household's father, exercised authority and protection in return for obedience and submissive devotion. Paul contests and redirects such claims of sovereignty and demands for loyalty. "Yet for us there is one God, the Father, from w h o m are all t h i n g s and for w h o m we e x i s t " (1 Cor. 8 : 6 ) . Matthew's Gospel takes a similar approach. The Lord's Prayer begins by addressing God as "Our Father in heaven" (Matt. 6:9). 32 The "our" signifies the community's different allegiance. The subsequent petitions for God's rule to be established and will to be done on earth and heaven ascribe total sovereignty not to Jupiter and the Roman emperor but to the God of Jesus. Later, Matthew's Jesus undermines allegiance to the emperor as "Father of the Fatherland" by instructing, "Call no one your father on earth, for you have one Fatherthe one in heaven" (Matt. 23:9). Seventh, Acts has often been understood to offer a positive view of the emperor. Arrested, Paul denies any wrongdoing against the law, temple, or the emperor (Acts 25:8). Governors Felix and Festus do not release him. So Paul exercises his right as a Roman citizen to have his case heard before the emperor (25:10-12, 21; 26:32; 27:24; 28:19). The emperor is presented as the representative of justice, the faithful defender of what is right and wrong. Paul appears as the model citizen, awaiting vindication from the just emperor, placing his faith in the empire and the emperor as its representative. The scenes seem to uphold the imperial judicial system. But these appearances of justice and the apparent rewards of submission to the imperial system are profoundly undercut. When Acts was written in the 80s or 90s, readers knew an important piece of information. Paul was dead, probably beheaded in Rome by the emperor Nero! His appeal to and trust in the emperor had been misplaced. His submission to the emperor had shown the emperor not to be trustworthy. His appeal resulted in death. The empire did not faithfully represent justice as its propaganda claimed. It was not to be trusted. It opposed God's purposes and endangered God's agents. KingsThe New Testament texts, including Jesus' parables and sayings, recognize that kings, along with the rest of the elite, exercised great power (Luke 22:25), waged war (Luke 14:31), enjoyed high status (Matt. 11:8), and possessed considerable wealth (Matt. 18:23-35). The parable of the unforgiving servant in Matthew 18 employs the scenario of a king who collects tribute of ten thousand talents (the amount Rome levied from Judea in the 60s BCE). He retains various clients and slaves skilled in financial matters to collect and administer the tribute, and he punishes those who do not extend 33 to others the favors received from him (Matt. 18:23-35). In the parable of the wedding banquet, the king invites members of the elite to celebrate his son's wedding but they insult him by not attending (Matt. 22:1-14). The king punishes them, attacking and burning their city. This is a likely reference to Rome's destruction of Jerusalem in 70 CE in which the city was attacked after a siege and burned. In an act of patronage, the king then invites the nonelite to the feast. Disturbingly, God the king (Matt. 5:35) is said to imitate these kings in dealing with disloyal humans (Matt. 18:35; 22:11-14). Along with imitation, there is also contrast and opposition. As we have noted, Rome ruled through alliances with provincial elites. One form of these alliances involved client kings whom Rome established to rule certain territory in return for loyalty to Rome. New Testament texts refer to several client kings from the Herodian dynasty who ruled Judea and Galilee. As those who enforced and benefited from the unjust and oppressive societal order, these kings often appear in the New Testament texts as murderously resistant to God's purposes manifested in Jesus and through his followers like the disciples and Paul. Followers are warned of the lengths to which kings (including emperors) will go to protect their privileges against challengers (2 Cor. 11:32; Matt. 10:18; Luke 21:12). tGrandson Herod "the Great" (37-4 BCE) appears in Matthew 2. He is the king who opposes God's purposes by trying to kill the newborn Jesus who has been commissioned to manifest God's saving purposes (1:21-23). The magi's announcement of a newborn "king of the Jews" threatens Herod since he is Rome's appointed "king of the Jews" (2:1-3; Josephus, Ant. 16.11). Matthew 2 reveals Herod's various attempts to defend his power, cataloging his elite strategies of allies, spies, lies, and murder that maintain oppressive structures: He uses alliances to gain information from the local Jerusalem leaders (2:4-6); He exerts his power to recruit the magi as spies or secret agents, sending them to gain intelligence about Jesus' birthplace (2:7-&i); He engages in spin, deceiving the magi by lying about his motives (2:8b); He employs violence after learning that the magi deceived him. He orders soldiers to kill the innocent and defenseless male infants in the region of Bethlehem so as to remove any threat (2:16). In addition to these actions, echoes of various versions of the E x o d u s story of P h a r a o h ' s o p p o s i t i o n to M o s e s e m p h a s i z e Herod's sustained opposition to God's purposes. Herodlike Pharaohkills infants to protect his enslaving power. Jesuslike Mosesescapes his murderous efforts. Jesuslike Moseshas a mission to deliver the people. God thwarts Pharaoh's and Herod's plans to kill God's agent of salvation. Instead Matthew 2 highlights Herod's death, referring to it three times (2:19,20,22). Matthew 2:22 mentions Herod's son Archelaus briefly but negatively. After his father Herod's death, Archelaus rules Judea "in place of his father Herod." That is, Archelaus continues Herod's harsh rule and opposition to God's purposes being worked out in Jesus. Because of this continuation, Joseph, faithful to his task to 35 protect Jesus as God's anointed agent, fears to go to Judea and settles in Galilee (2:22-23). Herod's other son, Herod Antipas, continues the family's opposition to God's purposes in Galilee (Mark 6:14-28; Matt. 14:1-12; Luke 9:7-9). He has John the Baptist, God's prophet who prepares people for Jesus, arrested and beheaded after John criticizes Herod Antipas for marrying his brother Philip's wife Herodias contrary to the Torah (Lev. 18:16; 20:21). Antipas reappears nears the end of Luke's Gospel where he questions and ridicules Jesus (Luke 23:6-12). The opposition to God's purposes continues in Herod's grandson and n e p h e w of Herod A n t i p a s , A g r i p p a I, the R o m a n appointed king of Judea (41-44 CE). He attacks the Jerusalem church, killing James and arresting Peter to please the elite (Acts 12:1-5). But God delivers Peter from prison (12:6-19) and punishes Agrippa. The people of Tyre and Sidon petition Agrippa for favor and food, flattering him as a god. Herod receives the acclamation. God strikes him down, has worms eat him, and kills him (12:1823). That is, God punishes this misplaced receiving of worship. The implications for an empire in which emperor worship was spreading are clear. Agrippa I's son, Agrippa II, appears near the end of Acts as a much less resistant figure. Friends with Festus, the Roman governor of Judea (Acts 25:13, 23), Agrippa II displays his power and status by being accompanied by the elite, "the military tribunes and prominent men of the city" (Acts 25:23). Agrippa II listens sympathetically to Paul's preaching. He declares Paul to deserve neither imprisonment nor death (Acts 25:13-26:32). Evident in the scenes involving (most of) the Herods is their opposition to God's purposes, defense of the status quo, and violence against God's agents. These characteristics belong to a larger biblical tradition of negative presentations of and suspicion about kings as opponents of God's purposes and agents (cf. Deut. 17:1417; 1 Sam. 8:9-18). This tradition is employed in Acts 4:26. The Jerusalem church prays after Peter and John have been arrested and released. Their prayer interprets the events in relation to this larger negative tradition about rulers. In words that echo Psalm 2, they pray, "The kings of the earth took their stand, and the rulers have gathered together against the Lord and against his Messiah." They go on to reference the action of Herod and Pilate against Jesus. The 36 phrase "the kings of the earth" commonly denotes rulers and nations opposed to God's purposes and people, as well as God's sovereignty over them (Pss. 76:11-12; 102:12-17; cf. Matt. 17:25). In contrast to these kings, Jesus is presented as a king who represents God's just and triumphant purposes. Jesus is agent of God's reign or kingdom or empire (Matt. 4:17). Matthew 21:5 quotes Zechariah 9:9, part of the vision of Zechariah 9-14 in which God establishes God's reign over the resistant nations. This role inevitably involves conflict with Rome's empire. We have seen the impact on Herod of the magi's question about the newborn "king of the Jews" (Matt. 2). The passion narratives emphasize Jesus' crucifixion as "king of the Jews" (Matt. 27:11, 29, 37, 42 ; John 19:3, 1 4 , 1 5 , 1 9 , 21). Since only Rome established client kings as allies to uphold the elite-dominated status quo, "everyone who claims to be a king sets himself against the emperor" (John 19:12). Jesus is crucified as a royal pretender, one who claims for himself an illegitimate title and threatens Rome's order. He is punished, as were others who claimed the title of king. The proclamation of Jesus' significance as the agent of God's rule also causes problems for early church leaders. After Paul and Silas preached in Thessalonica, Jason and other believers were dragged before the city authorities and charged with "acting contrary to the decrees of the emperor, saying that there is another king named Jesus" (Acts 17:7). The book of Revelation combines a number of these elements about kings in its disclosures about the nature of Rome's empire and of God's purposes (cf. 10:11): It introduces Jesus as "the ruler of the kings of the earth," asserting God's sovereignty over the power of the Roman emperor and its client kings (1:5). The "kings of the earth" (under Rome's influence, 17:18) along with "the magnates and the generals and the rich and the powerful"the elite of the empireare subject to G o d ' s wrath and condemnation (6:15-17). Their economic, commercial, and political dealings are condemned as unjust and demonic (17:1-6; 18:1-3, 9-13). Kings are agents of demonic spirits by resisting God's just purposes (16:12-14; 18:1-3). 37 Kings battle against God and are defeated (16:12-16; 19:17-19). God's rightful and just sovereignty over the kings and the earth prevails (15:3; 19:16). Kings pay homage to God (21:24). GovernorsProvincial governors comprised another face of imperial power. The senate or emperor appointed governors from the elite. In alliances with local elites, governors represented Rome's authority and interests. They exercised enormous power in keeping order (i.e., submission to Rome), collecting taxes, building public works, commanding troops, administering justice, imposing the death sentence on those who threatened Roman elite interests, and keeping local elites satisfied. No doubt some governors tried hard to fulfill their roles well, but the sources often attest governors to be self-serving and self-enriching in exercising harsh and exploitative rule. New Testament texts focus on encounters with governors in two arenasbetween Jesus and Pilate, and between Paul and several governors in Acts. Each of the Gospels narrates the exchange between Pilate and Jesus in different ways. We looked at John's account earlier in this chapter as Pilate and the Jerusalem elite struggle with one another and express loyalty to the emperor. Here we will look at Matthew's account, where the emphasis falls on Pilate and the crowds (Matt. 27:11-26). Pilate was governor of Judea from 26 to 37 CE. Often interpreters claim that the Gospels depict Pilate as weak and indecisive, intent on letting Jesus go but forced into executing him by the Jerusalem leaders. One interpreter goes so far as to claim that Pilate is politically neutral in Matthew's scene, which is devoid of any political pressures on him! But these claims make little sense given the strategic role, crucial responsibilities, and enormous power of governors. Pilate exercises life-and-death power. Having the power to put someone to death is not a neutral act. It is a very politically charged act. Pilate rules in alliance with the Jerusalem leaders. His task is to defend Rome's interests. He administers "justice" that protects Rome's elite interests against provincial troublemakers. He exercises this power, as we have seen from 38 John's account, with his Jerusalem allies even when the relationship is strained and contentious. These realities must shape our reading of Pilate's interaction with Jesus. When Pilate's Jerusalem allies bring him Jesus, charged with being "king of the Jews," for execution, Jesus' fate is sealed (Matt. 27:11-14). The narrative signals this in 27:3a. When the Jerusalem leaders hand Jesus over to Pilate, Judas "saw that Jesus was condemned." There has been no "trial," no meeting between Pilate and Jesus yet, but Judas knows that the elite work together to defend their interests. Two factors ensure Pilate will execute Jesus. One factor concerns keeping his allies happy by respecting their wishes. The second factor concerns the content of the charge. To claim to be "king of the Jews" without Rome's assent is to pose a political threat and to be guilty of treason. But to execute a kingly pretender is risky. Pilate knows that Jesus' crucifixion might provoke a violent uprising. Pilate needs to know how much support there is for Jesus. In verses 15 through 23 Pilate conducts a poll to assess the strength of support for Jesus. He knows that since his allies are jealous of or threatened by Jesus, Jesus must endanger the elite w a y of life (27:18). He offers a public bait and switch with Barabbas. His allies manipulate the crowd to shout for Barabbas's release as the price of Jesus' execution (27:20-21). Pilate tests their support by asking several times about Jesus. The crowd calls for his execution (27:22-23). This is a masterful piece of work by Pilate. Aided by his Jerusalem allies, Pilate polls the crowd and manipulates them into demanding what he already intended to do, thereby disguising his will as theirs (27:24-26). But the narrative is equally skillful in exposing Pilate's work. (1) Mrs. Pilate testifies that she has learned in a dream that Jesus is "righteous" or "just" (27:19). The word in Matthew's Gospel attests faithfulness to God's purposes. She ironically announces that Jesus' faithful challenge to Rome's way of structuring the world accounts for his death. (2) Pilate washes his hands of Jesus, blaming the people and having them take responsibility (27:2425). But the narrative's references to Pilate's questioning of Jesus as "king of the Jews," to the Jerusalem elite's manipulation of the crowd, and to Pilate's polling of the crowd reveal the self-serving nature of Pilate's and Rome's rule. It protects elite power and 39 privileges against a provincial threat. The narrative reveals that Roman justice is all washed up. Not surprisingly, Matthew's Gospel warns followers about the power of governors (and kings), but promises them aid from the Holy Spirit in defending themselves (10:16-18; cf. Luke 12:11-12). In discussing John's account of Pilate and Jesus, I noted above that in the tussle between Pilate and his Jerusalem allies, both parties elicit statements and actions of loyalty to the emperor from each other (19:12, 15). In a subsequent scene Pilate continues to taunt and subordinate these allies. He insists on putting a sign on Jesus' cross in three languages saying, "Jesus of Nazareth, the King of the Jews" (19:19). Such identification is intended to intimidate and compel compliance by reminding people of the futility of rebellion. The Jerusalem leaders want to modify the notice. They want to distance themselves from Jesus and from any notion of rebellion. So they propose it be edited to read, "This man said, T am king of the J e w s . ' " But Pilate refuses. He will not allow any such distancing. But ironically neither he nor his allies understand the truth of the proclamation that the notice makes. This scene underlines the message of the previous scene involving Pilate, Jesus, and the Jerusalem allies. The empire is not committed to God's purposes. It cannot recognize God's agent and does not receive his revelation of God's purposes. The empire is dangerous. Paul's encounters in Acts with four provincial governors are more mixed. The first involves Sergio Paulus, appointed governor of Cyprus by the senate (Acts 13:4-12). The governor becomes a believer when he witnesses Paul perform a miracle by blinding a court magician. The second concerns Gallio, governor of Achaia in Greece (Acts 18:12-17). Synagogue leaders, likely members of the local elite, accuse Paul before the governor of teaching contrary to the law. Gallio refuses to intervene in this intra-Jewish matter and dismisses them. Others then beat the synagogue leader Sosthenes in front of the governor, but he does nothing to protect the leader. His sanctioning of anti-Jewish violence is hardly a reassuring picture of a fair-minded governor ever vigilant for peace and justice for all. The third and fourth encounters involve governors of Judea, Felix (52-60 CE) and Festus (Acts 24-25). Exhibiting considerable interaction between these governors and their allies, the Jerusalem leaders, the scenes demonstrate rule that is more interested in 40 pleasing elite allies than securing justice. Felix hears elite accusations against Paul, keeps him in custody, hears him preach, but does not release him (chap. 24), partly so as to not alienate the Jerusalem elite. Verse 24 indicates that Felix hopes to receive a bribe from Paul for his release, but in its absence Paul remains in prison. Governor Festus (60-62) replaces Felix. Festus shows himself biased toward the Jerusalem leaders, his elite allies, so Paul appeals for his case to be heard in Rome (25:9-12). Festus agrees even though he and Agrippa II cannot find any guilt (25:25; 26:3032). Neither risks alienating the Jerusalem leaders by intervening to free Paul. The governors in Acts do not actively pursue believers. When they encounter Paul, their responses range from welcoming him (Sergio), to disinterest (Gallio), to inaction (Felix and Festus). Whereas Gallio is quite dismissive of synagogue leaders, Felix and Festus seem more interested in not offending their Jerusalem elite allies than in enacting justice. We will consider the roles of urban elites further in chapter 4. SoldiersWhereas most folk had little direct encounter with emperors and governors (though of course they lived daily with the consequences of their rule), soldiers probably provided the most visible face of Rome's power for local residents. For example, three to four legions, or about fifteen to twenty thousand troops, were stationed in the important Christian center, the city of Antioch in Syria, with an estimated population of about one hundred and fifty thousand. References to military action (Matt. 22:7; Luke 21:10, 20), groups of soldiers Qohn 18:3; Phil. 1:13), the Praetorium guard in Rome (Acts 28:16, 30; Phil. 1:13; Ephesus), and soldiers with various duties (cavalrymen, Acts 23:32; Rev. 9:16; spearmen, Acts 23:23) and of diverse ranks pervade the New Testament texts depicting them as enforcers of Rome's order. We noted in chapter 2 that the name of the demon in Mark 5:120 is "Legion" (5:9,15; Luke 8:30). This is the name of the key unit in the Roman army comprising some six thousand soldiers. Interestingly, the heavenly angels are divided into legions. When 41 Jesus is arrested, he forbids his followers to use any violence. He indicates that he could ask God to send "twelve legions of angels" (72,000!) but he will not ask (Matt. 26:53). Other military units include a speira, comprising up to six hundred soldiers (Cornelius, Acts 10:1), a guard unit (at the tomb, Matt. 27:65), a small detachment (Herod and soldiers mock Jesus, Luke 23:11), and a squad of four soldiers (four such squads guard Peter, Acts 12:4). Military personnel of various ranks appear. Tribunes were often of elite status. In Mark 6:21 they are among the guests at Herod's b i r t h d a y party. T h e y usually n u m b e r e d six per legion and commanded one thousand men. One commands a speira that arrests Jesus and takes him to the chief priests (John 18:3, 12). Tribunes are especially prominent in Acts 21 through 24 (see 21:3133, 37; 22:24-29; 23:10, 15-22; 24:22). They maintain order, arrest and guard Paul, and cooperate with the Jerusalem elite and the Roman governor. Acts 23:23 identifies the chain of command, "Then [the tribune] summoned two of the centurions and said, 'Get ready to leave... with two hundred soldiers.'" Centurions numbered about sixty per legion, commanding eighty to one hundred men. They had considerable military experience and often some social status. They appear in four contexts. (1) A centurion in Capernaum, recognizing Jesus' authority, seeks Jesus' healing power. Luke's account notes his alliance with and patronage of the local Capernaum elite (Matt. 8:5-13; Luke 7:1-10). (2) At the cross, a centurion discerns Jesus' identity as "Son of God" (Matt. 27:54; Mark 15:39) and "righteous" (Luke 23:47). The centurion attributes to Jesus terms usually associated with the emperor. (3) Acts 10 and 11 narrates the conversion of Cornelius, "centurion of the Italian Cohort" (ten cohorts per legion) as a demonstration of God's graciousness to Jew and to Gentile (Acts 11:12, 17-18); (4) Centurions are prominent in the arrest (Acts 21:32), attempted whipping (22:22-29), custody (23:16-24), and escort of Paul to Rome (Acts 27). Public order, military power, local alliances, and Roman justice are clearly interconnected. Soldiers of ordinary rank also enact Rome's power. "Soldiers of the governor" mock, torture, and crucify Jesus (Matt. 27:27; Mark 15:16-20; Luke 23:36; John 19:2, 23, 25, 32, 34). But in raising Jesus, God mocks Rome's military power, turning the soldiers appointed to keep Jesus dead as "dead men." They cannot guard against 42 God's life-giving power no matter what their bribed testimony (Matt. 28:1-15). Similarly, soldiers act against Christian leaders. God's power thwarts the soldiers who guard the arrested and imprisoned Peter (Acts 12:3-19). Herod Agrippa attempts to restore order by killing them. Soldiers, including cavalry and spearmen (Acts 23:23), arrest, escort, and guard the imprisoned Paul from Jerusalem to Caesarea to Rome (Acts 21-28). Some soldiers seek guidance from John the Baptist about their conduct in the army. By not urging soldiers to abandon their weapons, John the Baptist accepts troops as inevitable. John instructs soldiers to use their power moderately without "threats or false accusation" (Luke 3:14). Rome's military power is pervasive, but followers of Jesus are forbidden to use violence against it. Matthew's Jesus instructs disciples to disrupt the use of military power as a weapon of imperial control by combining extravagant compliance with nonviolent protests that undermine military authority (Matt. 5:41; see chapter 8's discussion of some forms of protest). Jesus rebukes a disciple who uses violence to counter Jesus' arrest. Jesus could ask God for "twelve legions of angels" to fight his arrest but this is not God's way (Matt. 26:53). John's Jesus reminds Pilate that his followers could fight his arrest with violence but will not do so because Jesus' kingdom represents God's purposes among humans ("not from this world," John 18:36). But Christians engage in their own warfare. Given the pervasiveness of the Roman military, it is not surprising that New Testament texts use military images to depict Christian living, as I noted at the close of the last chapter. 43 CHAPTER 4 ome controlled a vast amount of territoryfrom England in the north, across Europe to Judea and Syria in the east, and through Spain and across northern Africa to the south. About 60 to 70 million people lived within this territory, with perhaps 5 to 7 percent living in cities. Jesus' ministry engaged small t o w n s and v i l l a g e s in G a l i l e e such as N a z a r e t h , C a i n , and Capernaum. The Gospels do not mention the significant urban centers of Sepphoris and Tiberias. Jesus travels to the areas surrounding the cities of Tyre and Sidon in Syria (Mark 7:24-30) and south to Jerusalem (Mark 10-11). As the movement spreads, followers appear in cities such as A n t i o c h , E p h e s u s , S m y r n a , Pergamum, Thyatira, Sardis, Philadelphia, Laodicea, Philippi, Thessalonica, Corinth, and Rome. What role did cities play in R o m e ' s empire, and how did Christians negotiate urban imperial life? These questions have a further complication. Urban life had significant impact on and interaction with the surrounding countryside and their networks of small villages. Christian writings such as Philippians, Ephesians, or Revelation, associated with the urban areas named above, also addressed Christians living in surrounding rural areas and villages. Other writings such as 1 Peter were addressed to Christians in the widespread areas of five 44 Spaces of Empire provinces that included both urban and rural inhabitants. Some scholars have suggested that Mark's Gospel originated in Galilee and addressed its largely rural village life (see 6:6,35, 56). This arrangement of a city surrounded by dependent villages is reflected in the description of Jesus' visits to "the villages of [the city of] Caesarea Philippi," (Mark 8:27; cf. Luke 24:13) and to the area surrounding the cities of Tyre and Sidon (Mark 7:24-31; Matt. 15:21-22). King Herod kills the baby boys in Bethlehem and its s u r r o u n d i n g area (Matt. 2:16). J e r u s a l e m ' s control extends throughout Judea and Galilee. The temple-based ruling elite in Jerusalem send representatives to Bethany, between Jerusalem and Galilee, to investigate John the Baptist (John 1:19-28). Pharisees and scribes from the same elite group travel to Galilee (perhaps Gennesaret or the surrounding area, Matt. 14:34) to question Jesus (Matt. 15:1). Our questions, then, need to be expanded: What role did cities and the c o u n t r y s i d e p l a y in R o m e ' s e m p i r e , and h o w did Christians negotiate urban-rural life in Rome's empire? GospelsOne System: Urban and Rural AreasWhat was the relationship between urban and rural areas in Rome's world? In part each area depended on the other. The Roman Empire was an agrarian economy with land as the primary resource. Rural areas provided the food and other products required by cities. Cities consumed rural production, offered necessary skills, engaged in trade and commerce, and provided centers for imperial administration and security. This view, though, is too simple. It ignores the hierarchical and exploitative structures and dynamics of the empire outlined in chapter 1 above. The empire's urban and rural areas were deeply embedded in these sociopolitical structures. Their economic and social life reflected the inequalities discussed in chapter 1. Cities were centers of elite power and extended their political, societal, economic, and religious control over surrounding areas and villages. Throughout the empire, the small governing group of about 2 to 3 percent of the population, often urban based, controlled most 45 of the land and its production. They owned large estates worked by slaves. They collected rent, usually paid in kind, from peasants. They increased their holdings by foreclosing on defaulted loans. They traded surplus for needed resources and profit. They redistributed peasant production to cities, to their own larger estates and households, and to temples. One estimate suggests that 2 to 3 percent of the population consumed some 65 percent of production. Their economic control reflected their political power and exerted enormous influence on how most of the population lived. The Gospels frequently depict the centrality of land and agriculture under elite control. Jesus refers to practices of sowing (Matt. 13:3-9), to crop sabotage by an opponent who plants weeds (Matt. 13:24-30), and to harvest time (Mark 4:26-29). People squabble over i n h e r i t a n c e s (Luke 12:13-14) and i n d e b t e d n e s s ( L u k e 12:58-59). An elite person increases his landholdings, presumably outside the city and through agents or slaves and perhaps through default and foreclosure (Luke 14:18). Another has purchased five yoke of oxen, enough animal power for perhaps one hundred acres. If this is half of his arable land, his (minimally approximate) two hundred acres is very much larger than small peasant holdings of up to about six acres (Luke 14:19). Absentee landowners employ administratively skillful slaves to manage and increase their master's wealth (Matt. 24:45-51; Luke 12:36-48; 19:11-27). Some elite landowners keep their land and wealth in the family through inheritances (Luke 15:11-32). Some build bigger barns for their crops (Luke 12:16-21). They own hardworking slaves from whom even more is expected (Luke 17:7-10). They hire day laborers from a city or village marketplace to work in a vineyard (Matt. 20:1-16), or they rely on the labor of their sons (Matt. 21:28-32). Through their agents, they take violent and fatal action against tenants who themselves use violence to refuse handing over their rent payment in the form of a percentage of the yield (Matt. 21:33-46). The small ruling group exercised their self-benefiting control through a group (called retainers) who serviced elite interests. Soldiers kept order; priests ensured divine blessing on productivity; craftsmen provided various services; merchants traded production and procured necessary supplies; and village elders oversaw the tasks and negotiated with other villages and authori46 ties. The remaining 90 percent or so of the population did the actual physical working of the land. Clearly the losers in this vertical, hierarchical, and exploitative system were peasant farmers, urban artisans, and unskilled workers. Most peasants lived in small villages where households worked small areas of land that they owned or rented. Elites used rents and taxes to siphon off their production. Peasant labor, much despised by elites, sustained extravagant elite lifestyles. The elite's demands disrupted village patterns that cultivated communal emphases. Especially vulnerable was the practice of reciprocity whereby village households sustained one another through the fair and equal exchange of goods (see Luke 11:5-10). Likewise, village pressures that hindered anyone from getting ahead by accumulating more than others were countered by the necessity of looking out for the interests of one's own household. Vulnerability to forces outside their control, severe poverty, and powerlessness pervaded village existence. Peasants struggled to produce enough from small landholdings to feed extended households and animals, barter for other required goods, ensure seed for the next planting, pay taxes a n d / o r rents, and repay loans. Sometimes elites offered acts of patronage to alleviate the struggleforegoing a rent payment, lightening a tax demand, or financing a village festival or feast. However, such actions of apparent kindness and goodwill disguised the exploitative redistributive system. They placed villages in further debts of gratitude and dependence. The ruling group managed this urban-rural economic system to display, protect, and improve their own power, wealth, and status. Housing, clothing, transportation, food, education, manners, nonmanual work, and so forth were status markers. These markers emphasized the gap between "those with" and the vast majority of "those without." Cities were organized geographically to underline elite control and social hierarchy. Political, commercial, religious, social, and residential aspects of urban life were closely connected. Urban centers featured buildings of political-religious power such as administrative buildings, a forum, and temples. Other buildings such as theaters, stadia, temples, and markets (agora) provided gathering places and opportunities for elite control through 47 entertainments, rhetoric, religious observances, and trade. Elite housing tended to be surrounded by those who provided them with necessary services and who carried out their will. Those with no power or skills, having only their labor to sell, occupied the geographical margins of cities in cramped and unhygienic conditions. Often these people included peasants dispossessed of land through foreclosure or forced into the city because the small family landholding could no longer sustain the household. Multistoried buildings provided a vertical form of these power arrangements with the poorest and unskilled in top floors. This urban geography and these sociopolitical arrangements are evident, for example, in the parable of the great banquet (Luke 14:15-24). Having been rejected by three elite invitees, the host orders his slave into "the streets [or squares] and lanes of the town" to invite "the poor, the crippled, the blind, and the lame" (verse 21). When the city's nonelite poor does not fill his banquet hall, he sends the slave out again. The slave goes to those who are even more socially and geographically marginalthose outside the city walls on the highways and under the hedges. These are not peasants in rural villages but the dispossessed and beggars seeking to eke out a living. Their existence is so far removed from the host's elite urban world and they have been so damaged by it that they need to be coerced into entering it. We will further consider the impact of this elite system on the nonelite in chapter 7, below. Elite and nonelite interactions were limited, but were dominated by patron-client relationships. These relationships extended elite control through favor and dependence. They rewarded the compliance of craftsmen and laborers with the necessities of daily survival, namely opportunities for further work and small favors. Elites also extended patronage to civic groups. They practiced "euergetism" or "civic good works" such as sponsoring the regular meals and gatherings of guilds of artisans and workers (called collegia), burial associations, or other voluntary associations. They financed city entertainments, provided food handouts, and paid for buildings or fountains or statues in a city. These activities had great payoff. They increased the elite person's visibility, honor, and power. They obligated or indebted nonelites with dependence and gratefulness. They reinforced the hierarchical structure of submission and dependency. 48 The activities and control of the ruling group spanned both cities and countryside. Both arenas offered the empire's elite households numerous opportunities to maintain and enhance their sociopolitical status and power. Holding political offices within the elite-controlled city governance provided one significant outlet, as did patronage and displays of their wealth and power. Conspicuous consumption of resources, exhibited in an extravagant lifestyle, required the continual transfer of wealth to the elite. Elite households gained the cash necessary for this extravagant way of life from rents on land, loans to peasants and to artisans, investments in intercity trade and commerce, and inheritances, as well as rent from houses, apartments, shops, and warehouses. They exploited both urban centers and surrounding rural areas. They created "mini-economies" that interacted with other households and embraced urban-rural and urban-urban sociopolitical and economic interactions. The Gospels and Acts evidence the power and social control of urban-based elites. Herod invites to his birthday party "the great ones," military leaders, and "the first ones" or "leading men" of Galilee (Mark 6:21). Luke associates these "leading men" with chief priests and scribes in the Jerusalem temple (19:47). In Acts, the high-status women and the "leading m e n " of Antioch of Pisidia act against Paul and Barnabas to maintain civic order by expelling them from the city (Acts 13:48-51). In Acts 25:2, the chief priests and "the leading men" from Jerusalem visit the Roman governor Festus and speak against Paul. In verse 23, the leading men accompany King Herod Agrippa and his sister Bernice to the governor Festus in Caesarea to hear Paul. Other elite figures uphold Roman interests and societal order against those who threaten those interests and order. In Philippi, the local rulers and magistrates beat and imprison Paul and Silas for "advocating customs that are not lawful for us Romans to adopt or observe" (Acts 16:19-24). They later apologize when they learn that Paul and Silas are R o m a n citizens ( 1 6 : 3 5 - 4 0 ) . In Thessalonica, further social disorder breaks out and Paul and Silas are accused of "acting contrary to the decrees of the emperor, saying that there is another king named Jesus" (Acts 17:1-9). One man, Jason, is taken before the civic authorities (called politarchs). Further riots break out in E p h e s u s involving the temple of 49 Artemis, but the town clerk calms the riot and urges action through legal channels (19:23-41; see chapter 5, below, for discussion). In these situations, urban elites maintain civic order and protect their interests against any civic, economic, or political threat to their power. chapter 5, below). Fig trees with fruit conventionally signal God's blessing (Num. 20:5; Deut. 8:7-8). This fig tree, with leaves but no fruit, signifies the absence of God's life-giving blessing from this elite-controlled system that causes so much misery for most of the population. A withered fig tree depicts God's judgment on their system Qer. 8:13; 29:17). In Matthew's Gospel, this judgment is depicted in the parable of the king and the w e d d i n g b a n q u e t for his son (22:1-14). Matthew's version differs in some significant ways from Luke's parable of a man (not a king) who hosts a great dinner (not a wedding, Luke 14:15-24). The most significant response centers on the king's response when people refuse his invitations. Like the man in Luke's story, the king becomes angry (cf. Matt. 22:7 and Luke 14:21). But unlike Luke's man, Matthew's king sends troops, kills the people, burns their city, and then invites people from the street to come to the wedding (22:7). This action seems out of proportion to the offense, and destroys the sequence between verses 6 and 8. Fire often denotes judgment (Matt. 13:30, 40), and burning cities was a common way of punishing a defeated enemy. Rome burned Jerusalem after its defeat in 70 CE (Josephus, JW 6:249-408). Matthew writes into the parable judgment on the city's leaders for rejecting God's son, Jesus. Jesus criticizes the wealthy elite who store up their abundance for their own use while most of the population lacks food. The "rich m a n " who plans to build larger barns for his grain and goods is a fool because in ignoring God's just and life-giving purposes for all creation, he is not "rich toward God" (Luke 12:16-21). Jesus challenges a very rich ruler to sell what he owns and redistribute his money to the poor (Luke 18:18-27). Jesus warns people against the scribes (Luke 20:45-47). Scribes were members of the elite, with power based in education and training in the law. Their role was to interpret the traditions and apply them to everyday life, hence they had great influence. As allies with other elite groups such as chief priests (20:19), they used their power to enrich themselves at the expense of others. Jesus cites their ability to "devour widows' houses" for which "they will receive the greater condemnation" (20:47). These specific criticisms of elite exploitative power fall within a larger pattern that emphasizes God's judgment on Rome's world 51 and reversal of it at Jesus' return, and establishment of God's purposes. As I noted in chapter 2, the Gospels regard Rome's empire as devilish (Matt. 4:8; Luke 4:5-8) and declare it will be replaced in God's judgment by God's reign or empire (Matt. 24:27-31; Mark 13:24-27; Luke 19:11-27). That coming eschatological judgment includes very physical, material, and economic transformation. Abraham speaks to the rich man condemned in judgment: "During your lifetime you received your good things, and [the poor] Lazarus in like manner evil things; but now he is comforted here, and you are in agony" (Luke 16:25). Land will be restored to those who currently lack adequate resources. In the beatitude of Matthew 5:5, Jesus cites Psalm 37 to assure the powerless poor, the meek, that God will end the oppressive ways of the rich and powerful. Because of G o d ' s intervention, they will inherit the land or the earth. Similarly in Mark 10:29-30 Jesus promises, "There is no one who has left house or brothers or sisters or mother or father or children or fields [lands], for my sake and for the sake of the good news, who will not receive a hundredfold now in this agehouses, brothers and sisters, mothers and children, and fields, with persecutionsand in the age to come eternal life." The age to come in which God's purposes are enacted involves socioeconomic reorganization and reward for followers of Jesus. But interestingly, it is not only a matter of waiting for this eschatological judgment. In the passage from Mark, Jesus points to the hundredfold increase in household and land as already taking place in the present. A major strategy for engaging Rome's world in the present is the creation of households and social experience that offer an alternative to elite imperial practices. Instead of an emphasis on blood relations, Jesus focuses on "fictive" kinship among his followers and upholds the kinship values of peasant h o u s e h o l d s . L a b o r ( " b r o t h e r s , s i s t e r s . . . " ) and r e s o u r c e s ("houses... land") are not hoarded for one's own advantage but exist for the benefit of others. Jesus elaborates the same approach in Luke 6:30-36 in urging followers to meet the needs of others by sharing possessions and lending, even if repayment is not possible, thereby renouncing obligations of reciprocity so valued in elite interactions. Acts of mercy, prayer for God's transforming work, fasting, and continual focus on enacting God's empire and 52 its justice will mean that "all these things"what one eats, drinks, wearswill be provided by God both now and in the new age (Matt. 6:25-33). Instead of domination, service for the good of others is the way of life (20:24-28). Communal support and mutual service provide coping strategies until the final judgment. Paul's CommunitiesThe communities of believers founded by Paul must also negotiate their rural-urban contexts. Paul's letters to them offer some insight and guidance. ThessalonicaSince 146 BCE, Thessalonica, a city of perhaps forty to fifty t h o u s a n d , had b e e n the c a p i t a l of the R o m a n p r o v i n c e of Macedonia. The Roman governor resided there along with his support staff and garrison of soldiers. Some wealthy Romans were among the city's elite. Elected (elite) magistrates (called politarchs), a council, and a popular assembly governed the city. Ethnically the city probably comprised mostly Macedonians. Acts 17:1-9 suggests an active Jewish community, but there is no evidence of Jewish presence until the third century. Various gods such as Zeus and Apollo were worshiped along with mystery cults (Kabirus) and healing gods (Asclepius). A temple to the emperor Augustus was part of the active celebration of the imperial cult. First Thessalonians, written in the late 40s, gives some clues to the various ways Paul's converts negotiated their urban environment. Paul refers to supporting himself there by working (1 Thess. 2:9), and exhorts them to work with their hands "as we directed you" (4:11). The emphasis on work suggests the basis of his ministry was an artisan workshop (perhaps owned by Jason mentioned in Acts 17:1-9). In the workshop, more than on street corners or in the marketplace, he gave instruction to "each one" (2:11). The letter does not mention slaves. The exhortation to selfsufficiency in 4:12 would make little sense if slaves or freedmen, still obligated to their masters, were part of the believing community. The absence of references to wealthy patrons or to strife among socioeconomic or ethnic groups within the church suggests 53 a small group of mainly artisan converts (15 to 30?), perhaps meeting in an artisan workshop. The letter frequently mentions opposition to Paul's preaching (2:2) and the believers' suffering or persecution (1:6; 2:14; 3:3-4, 7). There was no empire-wide persecution of believers in the first century, so their suffering or distress originates in local conflicts. The reference to their "turning to God from idols" (1:9) probably accounts for the conflicts. Other peoplefamily members, fellow workers, friendsprobably saw their "turning" as dangerous because it risked reprisal on the city, on households, on business, and on personal survival from spurned and wrathful cosmic powers. Moreover, to renounce city gods and not participate in celebrations of the imperial cult were seen as acts of disloyalty and subversion. Paul's reference to their commitment to God's kingdom or empire would have sounded suspicious in suggesting that they follow another king or emperor (2:12). In 4:15 he refers to Jesus' return as a "parousia," a word that commonly designated the arrival of the emperor or imperial official or general in a city. The image subversively presents Jesus as the rightful ruler returning to claim sovereignty. In 5:3, by speaking of Jesus' coming as destroying Rome's world (see chapter 6, below), Paul mocks a common claim of Roman rule that it had established "peace and security." These subversive claims and the action of renouncing idols probably caused significant tensions and strained relationships between the group of Christ believers and others in the city. Perhaps things deteriorated when several believers died before Jesus' return (4:13-18), leading to doubts among the believers and mocking from outsiders that the believers' claims of salvation from death made no sense. Insults, broken relationships, economic isolation from people refusing to do business with them, verbal abuse, and perhaps threats of physical violence seem to comprise the conflict. Twice Paul suggests that Satan, invisibly at work behind the scenes, is responsible for the conflict (2:18; 3:5). The conflict may have involved more than spats within their neighborhoods. There are signs that it was civic and public, at least for some. For instance, in 2:14-16, Paul says they suffer from their "compatriots." He goes on to compare this suffering with actions of the Jerusalem elite in rejecting God's messengers, Jesus, 54 prophets, and Paul. The "compatriots" in Thessalonica, then, would be the city's ruling elite who are opposing the believers. How and why would the elite get involved in conflicts with a small group of artisan believers? In 5:15 Paul tells the believers not to repay evil for evil. The inclusion of this instruction suggests that some had done so. Whereas some seem to have become despondent in their faith (3:110; 5:14), other believers may have retaliated or escalated the conflict (5:15). How would they do so? In 5:14 Paul writes, "admonish the disorderly/disruptive" (author's translation). The usual translation for the last word is "the idle," but laziness does not seem to be an issue. Rather, the word commonly means the "disorderly" or "insubordinate," and can refer to civic disturbances. Perhaps some had reported the believers to the city magistrates after the believers had abandoned idol observance and the imperial cult. Maybe the magistrates took action against some believers, pressuring them to participate in cultic activity, perhaps by fining them. Or perhaps some, either believers or opponents, had tried to raise this situation in the city's popular assembly. Or perhaps this group of artisan believers had gone on strike to protest their treatment, causing civic disorder. (There is evidence from other cities of b a k e r s and l i n e n w o r k e r s s t r i k i n g , c a u s i n g disorder.) Interestingly, in 4:11-12 Paul advises them to live "quietly... and to work with your hands." The first phrase has the sense of withdrawing from political action and life, thereby diminishing conflict; the second might indicate Paul's exhortation to them to continue to work. Paul's general advice to the believers is to keep their heads down and not draw attention to themselves, thereby behaving "properly" toward outsiders (4:11-12). This approach will reduce the conflict. But he also reassures them that they are caught up in God's purposes (1:4; 2:12) and that their final salvation is sure (4:13-18; 5:9-10, 23-24). This commitment makes them different from their surrounding society concerned with idols (1:9-10) and lustful passions (4:4-5), and lacking hope (4:13). He constantly underlines their identity as a distinct family or household of "brothers and sisters." He refers to them by this term some seventeen times in the eighty-eight-verse letter (1:4; 2:1, 9 , 1 4 , 1 7 , and so forth; 5:12, 14, 25-27). And he urges them to sustain one another's 55 faithfulness in encouraging, upbuilding relationships (3:12; 4:18; 5:11, 13-14). He assures them of God's continued working among them (3:11-13). Continually he places their present circumstances in the context of God's endgame. Jesus' return means judgment on those who rule and who cause the believers trouble, and the establishment of God's purposes (1:10; 5:1-10). CorinthCorinth was the capital city of the Roman province of Achaia. The city had been destroyed in 146 BCE and had been resettled and rebuilt as a Roman colony in 44 BCE. By the 50s of the first century C E , the population was perhaps between 80,000 and 130,000, including some 20,000 in the area surrounding the city. The city was ethnically and religiously diverse. Its sociopolitical structure was typically hierarchical with a small number of elite families controlling the city's power and wealth. How did believers negotiate this Roman city? Paul had founded the church around the year 50 and stayed in the city about eighteen months (Acts 18:1-17). The community of Christ believers was ethnically mixed with Jews (1 Cor. 7:18; 9:2021) and Gentiles. The latter, with both Greek and Roman names, had converted from Greek and Roman religions (1 Cor. 8:7-10; 12:2). Though the Acts account indicates considerable conflict with the Jewish community, 1 Corinthians does not attest either external conflict with a synagogue or internal conflict between Gentile and Jewish believers. Although ethnic conflict seems to be absent, there are, nevertheless, significant conflicts and divisions within the church with groups committed to different leaders (1:10-11). These conflicts involve in part some issues of doctrine or spirituality, but mostly they seem to center on lifestyle issues resulting from decisions about how to negotiate their society. Some advocate significant accommodation, whereas others prefer a more separatist, even ascetic lifestyle. Paul opposes the former. The church comprised both elites and nonelites. In 1:26, Paul describes them by saying, "Not many of you were wise [or educated] by human standards, not many were powerful, not many were of noble birth." He means, of course, that some did belong to Corinth's elite group. We know the names of at least two of these 56 elite figures. Erastus held political office as city treasurer and spent his own money in an act of civic beneficence by paving an area east of the theater (Rom. 16:23). Gaius "hosts" the whole Corinthian church, suggesting that he was a leading patron for meetings either in his large house or in premises he hired. He is presented as an equal of Erastus and patron of Paul (1 Cor. 1:14; Rom. 16:23). There are other elites who are no longer identifiable, some of whom could well be women. Many of the issues Paul addresses concern to a significant degree elite behaviors and social practices. It seems that there are power struggles and debates over lifestyle among the elite believers as they maintain their elite societal status while they also work out the implications of their Christian commitments. Often we think of these Corinthian believers as being deliberately difficult for Paul. But it is more helpful to remember that they are in a new situation. They do not have centuries of Christian tradition to guide them about how to conduct their daily lives in an appropriate Christian manner. They learn as they go along. In the opening four chapters, Paul defends his way of speaking and preaching the gospel (2:1-5). Rhetoric was a major elite concern. It was not only a sign of education but it also provided a skill necessary for demonstrating one's prestige and for securing one's societal influence. Paul's refusal to employ a high rhetorical style has offended and embarrassed some who do not find him a worthy teacher but prefer Apollos. The nature of this dispute suggests that some elite members see the church as another place to extend their influence and gain honor for themselves in patron-client relations. In chapter 5, Paul criticizes a man for sexual immorality. His lack of criticism for the woman suggests she is an outsider. The fact that the church has done nothing about the man suggests he is a powerful elite member. In chapter 6, Paul complains that members are settling disputes in court. What the disputes are is not clear. In all likelihood they involve elites who are either in dispute with one anotherperhaps over property or business contracts or in dispute with nonelites (default on loans, nonpayment of rents, and so forth). In chapters 8 through 10 Paul discusses participation in cultic meals in temples. These meals may involve honoring the imperial 57 cult, celebrating the important Isthmian games centered in Corinth, or seeking the favor of another god or goddess. Such activity was crucial for elites to maintain their societal status. So, too, were private dining occasions where elite status, power, and wealth were demonstrated and reinforced (10:27-30). In 11:17-34, the Lord's Supper highlights the social divisions in the church. Some elite members celebrated the Lord's Supper in terms of their cultural practices of patronizing the meals and gatherings of artisan guilds and other collegia or associations. Meals commonly reinforced social hierarchy with differing qualities of food, tableware, and service for guests of different status. At the Lord's Supper elite members reflected social divisions and reinforced their status with abundance while othersthe nonelite did not have enough (11:18-22, 33-34). These practices attest to the presence of elite members among the Corinthian believers (1:26). They also suggest significant cultural accommodation and imitation. These believers import their cultural practices into the church and behave in conventional social ways. They continue their quest for honor, status, and power regardless of the gospel and the impact of their behavior on nonelites in the church. Unlike Thessalonica, the church has little conflict with its imperial cultural context because it copies it, not challenges it. Paul sees these behaviors as inconsistent with the gospel. The central issue, from Paul's perspective, seems to be not the problem of the church in the world, but too much world in the church. Paul challenges their behaviors and values. He urges more distance from their cultural practices and the formation of distinctive Christian practices about which they can be united (1:10-11). Fundamentally, he uses the narrative of Jesus crucified, risen, and returning, to help them understand the gospel and to relativize the claims of Corinthian culture. References to Jesus' crucifixion (1:17-28) and resurrection and return (chap. 15) frame the letter. Jesus' crucifixion shows the fundamental opposition of the Roman system, with its emphasis on noble birth, power, wealth, status, and public office, to God's purposes. The "rulers of this age," both human and cosmic, were responsible for Jesus' death. They do not understand Jesus' death as the revelation of God's power and wisdom, and as the turning point of the ages (1:1858 2:13). The current age with all its displays of elite power is passing away under judgment (7:29-31; 10:11). Jesus' resurrection means the new age is underway. The Corinthians' behavior is to be guided by the spirit until Jesus returns to establish God's empire and purposes over all opponents (15:20-28). Accordingly, Paul exhorts distance from cultural practices, though without total withdrawal from society (5:9-13). The distinctive practices that he wants will certainly disrupt and disadvantage elite members socially if they follow his teaching. In chapters 5 and 6, for instance, he demands action against the immoral man, regardless of his social status. In the church, elite members are accountable for their actions, not immune because of their status. The gospel requires moral and ethical standards that override cultural accommodations. He scorns their use of the courts by reminding them of their future role in the eschatological judgment (6:1-5). They should have their own judicial processes. In chapters 8 through 10 he forbids their involvement in cultic festivals and meals in temples. Such syncreticism is not compatib l e w i t h d i s t i n c t i v e C h r i s t i a n a f f i r m a t i o n s (1 Cor. 8 : 6 ) . Nonattendance at civic festivals presents a major challenge for elite members. But Paul does allow attendance at dinner parties with nonbelievers (10:27-30). He strongly rebukes their ways of celebrating the Lord's Supper shaped by cultural practices, and urges vastly different, much more egalitarian practices that honor the nonelite (chap. 11). Their worship is to reflect not cultural hierarchies but the Spirit's presence whereby all are graced and empowered to contribute to worship as expressions of love for one another (chaps. 12-14). The collection for the poor in Jerusalem introduces distinctive patterns of economic behavior (16:1-4). Paul invites them to be in solidarity with the poor not for their own advantages of patronage but to provide relief. In contrast to the empire's vertical and dominating structures centered on Rome, he urges relief that reverses the flow of taxes and tributes to Rome and that recognizes the solidarity of the Corinthiansincluding the elitewith other subject people. Throughout, in addition to reminding them of God's yet-to-becompleted purposes, he seeks to set in place an alternative communal identity and way of being in their world. He reminds them that they are a "holy" or "sanctified" people (1:2, 30; 3:17; 5:11; 6:11, 59 15-20; 10:8). The words mean "set apart" for God's just purposes, not for cultural imitation. Family or household language abounds, with "brothers and sisters" used about thirty times (1:10-11, 26, and so forth). He appeals to them to consider not their own advantage but each other's good (6:12; 8:7-13; 10:23-24), to forgo their rights as he has (chap. 9), and to be a unified though diverse body marked by mutual benefit, not hierarchy and domination (12:12-26). Paul's urging of cultural distance and distinctiveness (though not separation) is at odds with the imitation of imperial society that seems to be to the fore among some Corinthian believers, especially elite members. We do not know if the Corinthians, especially the elite, listened to Paul. But the situation that he addressed in the 50s indicated different believers negotiated their imperial society in different ways. PhilippiThe situation in Philippi seems to be closer to Thessalonica than to Corinth. Philippi was a small city, with about 9,000 to 12,000 people, and the church was probably small. Since Philippi was a colony (settled by Roman citizens), there were probably some Roman citizens in the congregation, but there is no evidence of elite members. Probably most members were artisans. In verses 1:27-30 Paul refers to conflict and struggle. They have opponents; they suffer for Christ's sake; their conflict is the same as the imprisoned Paul's (1:7); their opponents will be destroyed. The link to Paul's suffering suggests not just general opposition to preaching, but civic conflict and opposition from imperial officials (1:12-13). As with Thessalonica, the most likely scenario is that converts have abandoned the idols of their previous Greco-Roman gods and w i t h d r a w n from imperial cult celebrations. These actions have caused fear among acquaintances, fellow workers, and kin alarmed by the possibility of reprisal from angered gods and of disaster for their community. The opposition probably involves local pressureeconomic sanctions, verbal abuse, broken relationships, and perhaps occasional acts of violence. Local governing officials will also be involved if some Christians have refused oaths of loyalty to the emperor (as at Thessalonica). The Philippians seem to have responded in several ways. Paul urges them in 1:27-28 to "stand firm," be united, and not be intim60 idated. The commands are only necessary if the Philippians have responded with wavering, disunity, and fear. Disunity is evident in at least three different responses to the situation of civic conflict. To those who are afraid, Paul urges them to see their suffering as a participation in Christ's suffering and to remain steadfast (1:29; 2:8; 3:10-11). Others seem tempted to take on some signs of Jewish identity, whether from faithfulness to the scriptures, admiration for Judaism, or exemption from cult participation without civic conflict (3:2-11). Paul offers h i m s e l f as one w h o h a s renounced any reliance on status in favor of knowing "Christ and the power of his resurrection and the sharing of his sufferings" (3:10). A third response involves those who continue to participate in cult observances, perhaps as part of collegia or artisan gatherings. The language and contrasts of 3:17-21 indicate that Paul attacks idolatry, gluttony, and illicit sexual activity. This cluster of themes was stereotypically associated with attacks on collegia gatherings. These believers perhaps saw no incompatibility between their Christian understandings and conventional cultural behaviors. Paul's strong language labels them "enemies of the cross," and contrasts their citizenship of the Roman Empire with their heavenly citizenship and savior-emperor, Jesus. Paul's approach to their situation is to reject options two and three, and to support option one. He does not try to reduce the tensions and conflicts with their community. Rather, he exhorts them to continue to bear the suffering faithfully, as he and Christ did (1:2-26, 29; 2:17-18; 3:1; 4:4-9). He presents Jesus as a martyr, faithful and obedient even to death, but vindicated by God (2:611). In 2:4, support for one another involves economic assistance. Paul attempts to strengthen their community identity and boundaries as "brothers and sisters" (1:12; 3:1,13,17; 4:1, 8). He also uses military language; they are to be a united and faithful army (1:27). Underlining Paul's approach is his conviction that God's sovereignty triumphs over Rome. His arrest cannot prevent the gospel from being spread through the Praetorian guard (1:12-14). There are believers even in the emperor's household (4:22). In 2:9-11 he presents the risen, exalted Jesus as having all authority "in heaven and on earth," including over Rome. On "the day of Christ" (1:10; 2:16), Jesus will finally enact that authority when he triumphantly returns from heaven (3:20). 61 This confidence in God's sovereignty gives rise to the most radical part of Paul's response, namely recontextualizing their citizenship. Using cognate words, he twice underscores that their citizenship is in heaven. In 1:27 they are to live as citizens of the gospel; in 3:20 they are citizens of heaven. This does not spiritualize their citizenship. Rather this language of citizenship denotes an everyday living of appropriate values, practices, and commitments to the Roman Empire. Paul wants the gospel to shape their living. In 3:20 he contrasts imperial citizenship with heavenly citizenship, and contrasts Jesus as Lord and savior with the Roman emperor, who was also known by these titles. Paul is calling for an alternative loyalty that is conflictual and treasonous. In effect, he is asking those who are Roman citizens to renounce that status, just as Christ renounced his status and became a slave (2:5-11). He is asking them to live as slaves as he and Timothy are "servants [slaves] of Christ Jesus" (1:1). Slaves were of course the lowest members of imperial society with few rights or privileges. Paul's response here does not reduce their conflicts (as in Thessalonica), but ensures that they continue, if not escalate. Hence he exhorts them to be steadfast, faithful, and unified in their endurance until Jesus returns to complete God's purposes (3:20). He asserts that in the end, God's sovereignty over Rome's empire is sure. female image, it is called a "great whore" (17:1). The image has a long biblical history (Exod. 34:15-16). It denotes faithlessness to God's purposes through cultural accommodation and compromise (22:14-15). The city of Babylon is faithless to God's purposes, blasphemously ignoring them (17:3). It is greedy and economically exploitative (17:4; chap. 18; see chapter 6, below). It is violent and murderous, destroying those faithful to God (17:6; 18:24). It is arrogant, "glorifying herself" and claiming to rule the world (18:7). It is under judgment (chaps. 17-18). Accordingly, Revelation calls believers to "come out" of the whore Babylon so that they "do not take part in her sins" (18:4). The call is to refuse to participate in its civic, political, economic, and religious life. They are to resist and to live an alternative existence. They are to live in the new Jerusalem. The two cities coexist, with the new Jerusalem situated in the midst of Babylon's evil (22:14-15). The two contrast greatly: whore and bride (19:7); beast (17:3) and lamb (21:9); demons (18:2) and God (21:3); fruitless (18:14) and fruitful (22:2); intoxicated (18:3) and healed (22:2); murderous (18:24) and free from death (21:4). The new Jerusalem is God's gift from "heaven" (21:2). It is where God and people live together (21:3). It is free from suffering, pain, and death (21:4). It transforms all that is contrary to God's purposes (21:5). It is huge, with plenty of room for everyone and outdoing any known city (12,000 stadia long equals about 1,500 miles; 21:16). It does not h a v e a t e m p l e ( 2 1 : 2 2 ; see c h a p t e r 5, b e l o w , on t e m p l e s ) . Interestingly, it embraces both city and countryside since it includes, in echoes of the Garden of Eden, a river and tree that produce a continual supply of life (22:1-2; see chapter 6, below). This is God's alternative to Rome's empire. ConclusionIn this chapter we have seen some different ways in which Christians negotiated Rome's empire in their towns and countryside. Considerable diversity is evident in these responses. Paul's voice is loud in offering direction, as is the writer of Revelation. However, we do not know how many Christians in communities like Thessalonica, Corinth, Philippi, and the cities of Asia agreed with them or preferred other ways of negotiation. 63 CHAPTER 5 had extensive economic implications. It was finished in the early 60s CE just before Jerusalem and the temple were destroyed by Rome in 70 CE. Josephus and Matthew see this military-political destruction as God's punishment, though for different reasons. Josephus, a priest, sees it as the penalty for revolt (JW 5.559; 6.40912; 7.358-60), whereas Matthew regrettably views it as punishment for rejecting Jesus (Matt. 22:7). High-priestly families controlled the temple's daily operations. Powerful, wealthy, and privileged, these elite leaders were "entrusted with the leadership of the nation" (Josephus, Ant. 20.251). Their temple base supplied divine sanction for their societal power. They and the temple had an ambivalent interaction with Rome. They were entrusted with faithfully overseeing the temple's vibrant worship practices that celebrated Israel's distinctive covenant relationship with God. In recalling liberation from Egypt, for instance, the Passover festival sustained hopes of freedom from oppressive rule. Josephus notes that it was on "festive occasions that sedition is most apt to break out" (JW 1.88), a point not lost on Roman governors who stationed troops in Jerusalem to maintain order during festivals (JW 5.244). Yet the t e m p l e leaders also had to a c c o m m o d a t e R o m a n demands for loyalty and cooperation. Rome did not require local people to abandon their religious practices. It did, however, draw local religious observance into supportive relationship with the empire and exert some control over it. Rome commonly ruled through mutually beneficial alliances with local elites who would maintain the Roman-dominated status quo. The Roman governor appointed the chief priest (Josephus, Ant. 18.33-35, 9 5 ) . The Romans kept the chief-priestly garments in Jerusalem in the Antonia fortress, releasing them for festivals (Josephus, Ant. 15.403-8; 18.93-95). Sacrifices were offered in the temple for but not to the emperor and Rome (Josephus, JW 2.416). When war against Rome seemed likely in 66 CE, the "chief priests and notables" and "the principal citizens ('powerful ones'), chief priests, and most notable P h a r i s e e s " try to prevent it (Josephus, JW 2.410-24; Vita 21). They quickly assure Rome of their loyalty, appeal to those advocating war to desist from a hopeless undertaking, and ask the Roman governor to intervene with troops. These actions demonstrate the pragmatism and ambivalent position of being Rome's allies. 66 Priests exercised and benefited from economic power. The temple required agricultural products: wood, oil, grain, spices, wine, salt, lambs, bulls, oxen, rams, and doves (Letter of Aristeas 92-95). These supplies came from priestly estates (Josephus, Vita 422), trade, and "first-fruit" tithes paid by peasant farmers in kind (Neh. 10:32-39). The temple was part slaughterhouse in offering sacrifices, part warehouse in storing supplies, and part bank with storage chambers for wealth (Josephus, JW 5.200; John 8:20). Various Roman officialsthe governors Sabinus in 4 BCE, Pilate in the 20s, and Florus in the 60slooted the wealth in the temple treasury (Josephus, JW 2.50,175-77,293). Josephus notes that priests became rich from tithes and offerings (Vita 63); archaeological discoveries in Jerusalem confirm wealthy priestly dwellings. Jerusalem priests at times violently seized tithes even if poorer local priests starved (Ant. 20.181). The temple also collected a tax, which included those outside of Judea (Philo, Spec. Leg. 1.78; J o s e p h u s , Ant. 18.312). The emperor Vespasian co-opted this tax after 70 to rebuild the temple of Jupiter Capitolinus in Rome (Josephus, JW 7.218)! These taxes and tithes, plus those exacted by Rome and by local landowners, often amounted to perhaps 20 to 50 percent of a peasant farmer's yield. Indebtedness and expropriation of their land inevitably followed nonpayment. Ambivalent attitudes to the temple among nonelites are evident. Peasants paid tithes; some made pilgrimages to Jerusalem for festivals, and some participated in an extraordinary defense of the temple. In 40 CE, the emperor Gaius Caligula ordered a statue of himself, as Zeus, installed in the temple. Many, including priests and other leaders, launched a sustained nonviolent protest. They abandoned their houses and fields to confront Petronius, the governor of Syria, declaring that they would die rather than see the temple violated (Philo, Embassy 222-42). Others seemed not so pleased with the temple. Through the 60s CE, "a rude peasant" called Jesus, son of Ananias, announced the temple's demise, only to be beaten by "some of the leading citizens" and by the Roman governor (Josephus, JW 6.300-309). When war began in 66, violent protests expressed hostility and resentment against the Romeallied Jerusalem priestly leaders. Some nonelites attacked the high priest's house, killed him, and pursued "the notables and chief 67 priests/' burning the debt-record archives (Josephus, JW 2.426-29). To the consternation of Josephus and other priests, they ignored the elite priestly families to select by lot "ignoble and low born individuals" as new chief priests (Josephus, JW 4.147-61). challenges to their system (Matt. 23:23-36). He announces that they and their temple system are under God's judgment (23:38). murder; commit adultery; swear falsely; and worship other gods (7:5-9). He condemns them and their temple economy as robbers who steal the people's possessions and life. Jesus uses Jeremiah's phrase to condemn the temple system of his own day, presided over by the Roman-allied elite, including the chief priests. But the term "robber" can also be translated "bandit." In the time of Jesus there were a number of peasant-based groups of bandits. Experiencing increasing economic and social hardship, they claimed power under the leadership of a charismatic figure to plunder elite property and attack elite personnel. In a sharp reversal of roles, Jesus applies the term "bandit" to the Jerusalem leaders to charge them with robbing the people and destroying and pillaging the nation! As an alternative, Jesus sees the temple as a house of prayer. Jesus quotes from Isaiah 56:7 where the prophet envisions an inclusive people and temple embracing marginal persons such as eunuchs (cf. Lev. 21:16-23; Deut. 23:1) and foreigners. Jesus evokes this vision of welcome and merciful inclusion that describes his whole ministry. He has claimed to mediate God's forgiveness, healing wholeness, and social inclusion independently of the temple elite's attempts to control these blessings. In M a t t h e w ' s account (Matt. 21:14), Jesus heals the blind and the lame, mediating God's transforming presence to those that the elite had previously excluded from the temple (Lev. 21:16-24; 2 Sam. 5:8). The temple leadership perceives Jesus' actions and words in the temple as a rejection of their authority and temple system, and as offering a dangerous vision of alternative political, economic, and societal structures. In each Gospel, references to their attempts to kill Jesus or to his death follow immediately (Matt. 21:15, 23, 4546; Mark 11:18; Luke 19:47; John 2:18-22). They have too much to lose to tolerate such a challenge. Luke's Gospel begins with Zechariah undertaking his priestly duties in the temple. While doing so, an angel interrupts the temple liturgy to announce to him Elizabeth's conception of a son to be named John (1:5-23). The temple is a place of revelation. The Gospel ends with the disciples, who after encountering the risen Jesus and witnessing his ascension obediently return to Jerusalem. There they were "continually in the temple blessing God" (24:53). The temple is a place of worship. The opening chapters of Acts alternate scenes that locate the disciples in households and in the temple. In households they assemble, pray, choose another leader, receive the spirit, preach, welcome new converts, learn, share all things, break bread, and experience persecution (1:12-2:47; 4:23-5:11; 6:1-11; 8:3). They also frequent the temple (2:46), but increasingly experience conflict with the temple leaders. A healing and preaching about the risen Jesus (chap. 3), including the accusation that the leaders had killed Jesus (3:15), provoke the wrath of the temple leaders with imprisonment (4:3) and threats (4:21). Further healings and preaching lead to imprisonment and floggings (5:12-42). Stephen preaches against the temple and proclaims its destruction (6:13-14). The temple leaders stone him to death (7:59). Thereafter, the followers of Jesus generally have little to do with the temple but household gatherings become more important. Later, Paul is accused of preaching against and violating the temple (21:28-40). The consequences, narrated in chapters 21 through 28, show how deeply the temple is embedded in the Roman political system. The temple leaders ally with Roman officials to defend their common interests and the hierarchical status quo against a possible threat. These conflicts show the leaders to be opposed to preaching about Jesus as God's messenger, to healings as signs of God's purposes for life and wholeness, and to any threat to their power and the temple's hold on people's lives. fused Roman gods with local gods as long as loyalty to Rome was secured. For the Jerusalem temple, for instance, Rome exercised control through its allies, the chief priests, who agreed to sacrifice two lambs and a bull twice daily for Rome and for the emperor's well-being, but not to the emperor. The refusal by lower-ranked priests to offer these sacrifices in 66 CE, despite the chief priests' protests, was understood to be rebellion and contributed to the outbreak of war Josephus, JW 2.197,409-10). Accordingly, in Ephesus, capital of the Roman province of Asia and an important center for the early Christian movement, Rome did not resist the long-established and dominant worship of the goddess Artemis but ensured it was within Roman control. When Paul and other preachers of "the Way" in Ephesus encountered devotees of Artemis, they had to negotiate it not only as a religious phenomenon, but also as a central civic and imperial entity. Artemis was a mother and/or wife goddess who was often represented in statues with what seem to be many breasts. People of Ephesus regarded this mother goddess as the divine protector and sustainer of civic life in the city. She was understood to have saved the city. Her temple, the Artemisium, impressive in size and wealth and one of the seven wonders of the ancient world, visually displayed her importance and the high regard in which she was held. Not surprisingly, a number of inscriptions and coins from Ephesus show emperors often linking themselves with the temple. The temple of Artemis played an important role in the province's religious and civic life. The name Artemis was understood to derive from a Greek word meaning safe and happy. She was understood to be a merciful and benevolent goddess who satisfied people's needs. Through the temple personnel, she had power to give oracles and to intervene to improve situations. She was also understood to save people powerfully from cruel and indiscriminate cosmic powers. Preaching about Jesus rivaled such claims. The temple played a crucial role in the province's economic life. Those who came to Ephesus to admire the temple's magnificence or to seek the goddess's intervention required food, shelter, and offerings. Because it was understood to be a divinely secured location, the temple provided facilities for people to deposit wealth. The temple gained further wealth from the production of the temple's large estates, sacred ponds, and herds. People left bequests to 72 the temple in wills. Those who had sought the goddess's benefits made sacrifices, and those who had experienced her benefits made donations. Accordingly the elite overseers of the temple had substantial wealth from which to make extensive loans. Could a follower of Jesus take such a loan? In addition to this economic involvement, the temple was important for the city's civic and legal life. Inscriptions attesting decisions made by the city council were deposited in the temple. The use of the temple as an archive suggested the goddess's sanction for the elite-controlled administration of the city. The temple was also an asylum for those in political danger, for debtors, for fugitives, and those needing assistance for any reason. It is important to remember that elite figures and groups were responsible for the administration of religious, economic, civic, and legal matters. They fostered, and benefited from, Artemis's power and status. While Ephesus benefited from Artemis's presence, the city, notably its elite, understood itself to be responsible for the temple's administration and to be entrusted with the task of honoring the goddess (Acts 19:35). It did so in several ways. There were important annual festivals, one that celebrated Artemis's birth and one that celebrated her various roles. These festivals involved street parties, feasting, processions, and special cultic celebrations. There is also evidence of a procession every two weeks along a route from the temple outside the city into the city to its theater. The procession carried numerous statues of Artemis, and involved more than 250 people. Elite persons played crucial roles in funding, organizing, and executing these celebrations, providing entertainment, food, and personnel. Through involvement in these civic responsibilities they gained social honor and power. The silversmiths mentioned in Acts 19:24, who made silver shrines of Artemis, belong to this civic honoring of Artemis as protector of Ephesus. No such silver shrines have been discovered, but comparable items made of other materials existed as votive offerings. The loss of silver objects over several thousands of years is not surprising since it is a valuable metal. The silversmiths in Acts 19 probably constitute a trade group or guild of silver workers. An inscription from Ephesus confirms the existence of such a group. Demetrius, their leader, may well be their elite patron who gains honor from funding and perhaps 73 hosting their monthly gatherings and protecting their interests. An i n s c r i p t i o n has b e e n found that identifies a p e r s o n n a m e d Demetrius on the board of wardens of Artemis's temple. However, the name Demetrius appears in numerous inscriptions so it may not be this particular Demetrius. The silversmiths are concerned about the impact of Paul's two years of preaching (19:10), notably his claim that "gods made with hands are not gods" (19:26; compare his approach in Athens, Acts 17:22-34). Their concern is partly e c o n o m i c (19:24-27#). Their other concern centers on Artemis's honor; if Christian preaching successfully takes people away from honoring Artemis, her role in Ephesus will be diminished, she will be offended (19:27b), and tragedy might befall the city. That is, the special relationship between Ephesus and Artemis would be destroyed and Artemis's role in providing divine protection and sustenance for Ephesians lost. These concerns provoke cries of loyalty to Artemis (19:28), civic confusion, and a spontaneous gathering in the theater. They seize some of Paul's companions but not Paul (19:29-31). Further confusion and shouts of loyalty to Artemis follow (19:32-34), requiring the town clerk to assure the assembled crowd that the relationship between Artemis and Ephesus, the keeper or warden of Artemis's temple, is divinely secured and impregnable (19:35). He reminds them that there are recognized legal processes under Roman supervision for complaints (19:36-40). He dismisses the assembly and folk apparently leave peaceably, assured of Artemis's durability (19:41). A number of clues indicates how deeply embedded is this scene in Roman imperial realities. As I have noted, the focus on Artemis, her temple, and all the activities associated with it implies numerous key roles for elite members of Ephesian society. The prevailing mind-set of civic responsibility encouraged acts of euergetism ("civic good works") or social benefaction in which elites involved themselves in civic roles and in funding activities in order to enhance their own status and power. Such actions expressed loyalty to and sought favor from Roman officials, who rewarded leading citizens with further opportunities to enhance their wealth, honor, and power. It seems from verse 31 of Acts 19 that Paul has befriended some of the elite, officeholders called Asiarchs. This title refers to a vari74 ety of functions within the city's administration carried out by elite members; some functions associated with the powerful city council and some involving duties associated with the temple. No information clarifies how Paul had contact with these Asiarchs, and some interpreters think it is historically unlikely that Paul would have had such contact. But the narrative causes us to wonder how he might have met them. Perhaps they had heard him preach in the hall of Tyrannus (Acts 19:9). Some of them are described as being friends of Paul. The category of friendship indicates their obligation to Paul to keep him safe. The town clerk plays a leading role in the scene. He is a powerful member of the elite, associated with the city council and often responsible for record keeping. His importance is also reflected in that he addresses, calms, and dismisses the crowd (19:35-41). He does these tasks not by advocating for Paul, as many interpreters have mistakenly claimed, but by rehearsing central civic affirmations, defending Artemis, and maintaining Roman control. This Rome-friendly elite Ephesian upholds Artemis's central role in the city and honors the great value R o m e places on civic order. Artemis and Rome guarantee and protect elite control, which means elite wealth and power over the city. Accordingly he: Declares E p h e s u s ' s premiere role in being the keeper of Artemis's temple (19:35fl). Ephesus's relationship with and role in honoring Artemis will not change at all. Reminds them of the sacred stone or statue that fell from heaven, a reference to a sign from Zeus that legitimated Artemis's power. It refutes Paul's claims that Artemis was made by human hands and is therefore not a god (19:26,35b). Assures them that nothing can shake or contradict these bedrock affirmations (19:36 A ). The Demetrius-led civic disturbance is unnecessary (19:37). The town clerk does not think Christian preaching poses a threat. Indicates that Paul and his associates have not violated Artemis's temple as "temple robbers" or blasphemers (19:37). Reminds (and thereby rebukes) Demetrius and his crafts guild of recognized channels for expressing a complaint, namely the courts and the Roman proconsul (19:38), and the regular (rather than a spontaneous) assembly (19:39). Roman processes 75 protect Artemis. Ironically, Demetrius, not Paul, is the one charged with endangering civic order. Warns everybody of the great danger from the Romans of being perceived to have rioted (19:40). The town clerk, an elite ally of Rome charged with protecting the status quo, clearly seeks to prevent disorder and its consequences. The town clerk has good reason to be concerned about Roman intervention, especially in relation to disturbances concerning A r t e m i s . Twice in previous d e c a d e s R o m e had i n t e r v e n e d . Concerned that the right to asylum at temples was being widely violated whereby runaway slaves, debtors, and criminals found unwarranted protection, the emperor Tiberius (14-37 CE) and the senate required cities, including Ephesus, to petition for the right to extend asylum at Artemis's temple. The right was renewed but notice was served that the Artemis temple needed to ensure it upheld rather than undermined public order (Tacitus, Ann. 3.60-63). Also, around 44 CE the emperor Claudius and the proconsul Persicus addressed some corrupt financial dealings in the Artemis temple that had seriously reduced the temple's revenue. They issued decrees to prevent the situation recurring. Civic disorder would certainly bring further intervention. Why does Acts tell this story? How does it help followers of Jesus negotiate the Roman Empire? Often, interpreters claim the story declares the Christian movement's victory over goddesses like Artemis and over Rome's elite allies. Preachers like Paul, so the argument goes, are protected from unruly mobs and defended by elite civic officials. But such a reading is difficult to sustain and ignores the ways in which both Artemis and Rome exerted power over centers like Ephesus. Rather, the scene not only exposes fundamental conflicts between followers of Jesus and of Artemis, but also shows the difficulty and the threat of proclaiming the Christian message in such a context. Preaching can create opposition. Paul's preaching collides with civic claims and provokes Demetrius and a civic uproar. The scene does not indicate whether Demetrius pursues his complaints through the courts, proconsuls, or regular assembly. But given the gravity of his charges and the widespread support they elicit, it seems unlikely that the issue just goes away. Preachers need to be ready for a hostile response. 76 The conflict also silences preaching. Paul cannot reach the theater to address the crowd; the elite officials and friends dissuade him from attending (19:30-31). Gaius and Aristarchus do not get to speak (19:29). Alexander fares no better (19:33-34). Paul departs Ephesus soon after, leaving the believers to continue to negotiate life in the city on a daily basis (20:1). Perhaps they do so by not participating in anything to do with Artemis (15:19-20). But after Paul's unsuccessful attempt to confront the temple's activities directly, it seems unlikely that the scene encourages them to do so. Rather, they must live in this multireligious world, finding their own faithful place in it without necessarily expecting to overturn its civic and imperial structures. T h e r e w e r e n u m e r o u s o p p o r t u n i t i e s for b o t h e l i t e s a n d nonelites to participate in the imperial cult. Festivals had a central role in marking significant events such as an emperor's birthday, accession, military victories, and so forth. Festivals were multidimensional and offered numerous means for involvement, including processions through streets, prayers and hymns, oaths of loyalty, the offering of sacrifices (wine, cakes, incense, animals) at both central locations (council houses, squares, stadia, theaters, and so forth) as well as at small altars along the route. There were also street feasts, competitions, entertainments (gladiatorial displays, horseracing, animal fights, athletic contests), and distributions of food. Meetings of trade and artisan guilds or associations (silver workers, butchers, bakers, fish dealers, wool workers, and so forth) honored emperors at their regular gatherings by offering prayers, making sacrifices, and consuming meals involving food offered to gods. Imperial images were also located in households, especially elite households, where incense, sacrifices, and prayers were offered. In addition to public and temple monies, elites provided personnel and funding for many of these activities. These festivals unified populations, the cultic calendar provided societal rhythm and organization, and cult sacrifices were a common source of meat sold in the restaurants often associated with temples. How did Christians, for example, in the provinces addressed by 1 Peter (Pontus, Galatia, Cappadocia, Asia, and Bithynia; 1 Pet. 1:1) and in the seven cities in Asia addressed in Revelation (Ephesus, S m y r n a , P e r g a m u m , Thyatira, Sardis, Philadelphia, and Laodicea; Rev. 2 - 3 ) negotiate these observances? What did they do on festival days? Did they join in the feasts? Did they watch the processions? Did they participate in guild meetings, which were crucial places for social interaction and building economic networks? Did they join in offering incense or cakes or wine to an imperial image, or did they go late and avoid the sacrifices? These questions are difficult to answer. For example, 1 Peter instructs Christians to adopt behaviors that enable them to fit in with the norms of the rest of society. They are to "maintain good conduct among the Gentiles so that if they accuse you of wrongdoing they may see your good deeds and glorify God" (1 Peter 2:12, author's translation). They are to submit to human institu78 tions (2:13) because God's will concerns their "doing right" (2:15). Slaves are to submit to their masters (2:18), wives to their husbands (3:1). As part of this very conventional behavior, they are to be subject to the emperor (2:13) and honor the emperor (2:17). We have seen in the description above the "good conduct" of those who honored the emperor with sacrifices, festivals, feasts, competitions, and so forth. What did Christians do? Some interpreters have said that of course Christians would not be involved in offering sacrifices. But we cannot dismiss the possibility of idol worship too quickly. Some Christians in Corinth did not "shun idol worship," and Paul was willing to eat food offered to idols though others were not (1 Cor. 8-10). A decree sent out by the Jerusalem council forbids food offered to idols, but the prohibition would be unnecessary unless some Christians thought it acceptable to eat such food (Acts 15:28-29). The late-second-century Christian writer Tertullian, writing some one hundred years after 1 Peter, has to argue against Christians being involved in idolatry (De Idol 17.1). And the third-century Christian Origen knows Christians who bow before images, pretending to worship as was the social custom, but not doing so wholeheartedly (In Exodum Homilia 8.4). First Peter's emphasis on "good conduct" and submission may suggest an expectation that Christians would be involved in imperial celebrations. If a master led his household in offering incense to a household image of the emperor, Christian slaves would obey the letter's teaching on submission by participating. So, too, would a Christian wife. In neither situation does the letter include an exceptive clause, "submit except in circumstances involving sacrifices." It is interesting that while the letter employs many citations from the Old Testament, not once does it quote a prohibition against idolatry. It does forbid participation in "lawless idolatry" (4:3-4) but the prohibition is part of a list forbidding immoderate or excessive behaviors. The letter does not forbid Christians from having sex, or drinking any wine, or associating with friends; but it does forbid socially disruptive, excessive, and reckless practices. M o r e o v e r in 3:15, it e x h o r t s b e l i e v e r s to "in your hearts reverence Christ as Lord" (RSV). The heart is the center of a person's commitments and loyalties. To reverence Christ as Lord in one's innermost being is to recognize him as the one in whom 79 God's saving purposes are manifested. The heart is known only to God. Such an emphasis leaves open the possibility of assuming other roles, such as participating publicly in imperial cult celebrations and sacrifices as a social convention, without involving one's heart or innermost being. This is the practice noted by Origen (above) that some third-century Christians seem to have adopted as a way of coping with the expectations of their society while preserving their commitment to Christ. It could mean that Christians addressed by 1 Peter participated in honoring the emperor in street festivals or when incense was offered in a trade-guild meeting or household observance. RevelationUnderstanding what 1 Peter requires of its Christian readers may be both complicated and clarified by another document, the book of Revelation. Revelation addresses seven churches in Asia, part of the same area addressed by 1 Peter. Not only do their audiences overlap, but the two texts are written around the same time in the last few decades of the first century. Revelation is, in part, a letter to all seven churches (see Rev. 1:4-5). But it includes within it letters specifically addressed to each church. These individual letters comprise chapters 2 and 3. Significantly, in perhaps four of the seven letters, the writer of Revelation acknowledges that some church members participate in idol worship and expresses strong opposition to the practice. For example, in addressing the church in Ephesus, the stronghold of Artemis and location of an imperial temple (2:1-7), he notes that they "hate the works of the Nicolaitans, which I also hate" (2:6). Who are the Nicolaitans? In the letter to the church in Pergamum (2:12-17), also the site of an imperial temple, the writer strongly opposes those that "hold to the teaching of Balaam" to "eat food sacrificed to idols and practice fornication [immorality]" (2:14). "Immorality" may be a literal reference or it may be a common Hebrew Bible metaphor for idolatrous worship. Some in the church at Pergamum participate in worship of images, probably including images of the emperor. Verse 15 seems to sum up the previous verse by calling these people Nicolaitans. If this is correct, it clarifies the problem referred to in verse 6 in Ephesus. Some in the church in Ephesus also participate in worship involving 80 idols. The next letter, to the church in Thyatira (2:18-29), refers to its tolerance of a teacher who advocates the same two practices, eating food offered to idols and immorality (2:20-23). It is also possible that the reference in 3:4-5 to the few in Sardis who have not soiled their garments may also refer to involvement in worship of images. What is clear in these letters is that the churches in Asia are t h e m s e l v e s d i v i d e d over h o w to e n g a g e the e m p i r e . S o m e Christians take a very participationist and accommodationist approach, joining in imperial cultic activity. Others adopt much more distancing practices, probably involving some social and economic hardship. The writer of Revelation clearly supports the latter approach and condemns the former. He is adamant that there can be no compromise, no involvement in idol worship. One of the revelations of this book is that the empire is evil and under the power of the devil. Participation in the imperial cult means not just worship of the emperor but worship of the devil (chap. 13). The empire's political and economic life is under God's judgment (chap. 18). Interestingly, when this evil Roman imperial world is judged and destroyed by God, and when God's life-giving and just purposes are established in a "new heaven and a new earth" and in the new Jerusalem (21:1-21), there will be no temple in the city (21:22). With the end of the imperial world, there is no need for temples that are deeply embedded in political, social, economic, and religious systems that constitute the empire. One of the implications of Revelation's analysis is that believers must distance themselves from any civic festivals, guild meetings, and h o u s e h o l d o b s e r v a n c e s of i m p e r i a l w o r s h i p . B u t this approach is of course directly in conflict with 1 Peter's instruction to "honor the emperor." Christians in Asia are being instructed to adopt two quite different strategies for negotiating the empire. They have to choose between two quite different sets of practices. Should they participate freely in the empire's life while reverencing Christ as Lord in their hearts? Or should they withdraw from its demonic social, political, economic, and religious structures? It is q u i t e p o s s i b l e that the conflict e v i d e n t in the c h u r c h e s addressed by Revelation involves some who follow 1 Peter's accommodationist teaching and those whose opposition is supported by Revelation itself. 81 ConclusionIn the Roman world, religion was not a private matter. Rather, its observance was explicitly public, very communal, and quite political. Temples were not separate religious entities removed from the political, economic, and social world. The Jerusalem temple, the Artemis temple in Ephesus, and the multifaceted observance of the imperial cult in temples and cities across the empire were deeply embedded in Roman imperial structures. Christians were by no means in agreement on how to negotiate them. We have seen in this chapter a spectrum of responses embracing opposition, accommodation, and active participation. 82 CHAPTER 6 y the first century, an important set of theological ideas was at work that expressed and legitimated Rome's empire and power. The gods have chosen Rome. Rome and its emperor are agents of the gods' rule, will, and presence among human beings. Rome manifests the gods' blessingssecurity, peace, justice, faithfulness, fertilityamong those that submit to Rome's rule. Rome and its elite allies in the empire's provinces actively promoted these claims. They expressed their understanding that Rome's dominating place in the world was the will of the gods. These ideas justified efforts to force people into submission to Rome. These ideas justified the empire's hierarchical society, the elite's self-enriching rule, and its privileged existence. These claims also promoted "appropriate" ways of living for inhabitants of the empire, notably submission and cooperation. To submit to Rome was to submit to the will of the gods, and the means of participating in their blessing. That is, these claims had profound implications for how society under Rome's control was structured, and how people lived. Various "media" ensured that these expressions of and sanctions 83 for Rome's rule circulated widely throughout the empire. Coins, the handheld billboards of the empire, proclaimed them in every marketplace with images of imperial figures and gods and goddesses. So did statues of imperial figures. Festivals announced them while celebrating the emperor's birthday or succession or military victory. Imperial and military personnel were the face of this divinely sanctioned empire and the agents of the gods' will. Archways or gates and imperial buildings declared them. Numerous writers, usually writing for literate elite audiences, repeated them. The poet Virgil, for example, has Jupiter appoint Romulus to found Rome and its empire for which Jupiter declares, "I set no bounds in space or time; but have given empire without end" to Romans who will be "lords of the world" (Aeneid 1.254-82). Later Anchises announces to Aeneas that Rome's mission is "to rule the world... to crown peace with justice, to spare the vanquished, and to crush the proud" (Aeneid 6.851-53). Around the time of Paul's mission, Seneca has the emperor Nero declare: "Have I of all mortals found favor with Heaven and been chosen to serve on earth as vicar of the gods? I am the arbiter of life and death for the nations" (Clem. 1.1.2). The Jewish historian Josephus has Rome's puppet Agrippa recognize that "without God's aid so vast an empire could never have been built up" (JW 2.390-91). Tacitus has a Roman governor remind the leader of a German tribe that "all men had to bow to the commands of their betters; it had been decreed by those gods whom they implored that with the Roman people should rest decisions what to give and what to take away" (Tacitus, Ann. 13.51). Claims of Rome's election as the agent of the gods were made not only for the empire as a whole but also for specific emperors. Prior to Jesus' ministry, the emperor Augustus (31 BCE-14 CE) actively promoted these views, as did Vespasian, the emperor around the time that the Gospels were written. After much political instability in 68-69 CE, Vespasian emerges as the triumphant emperor. Suetonius notes various omens and signs that he claims indicate the gods' choice of and favor on Vespasian. One of these signs comprised a dream in which Nero was "to take the sacred chariot of Jupiter Optimus Maximus from its shrine to the house of Vespasian" (Vespasian 5.7). This dream was understood to signify the transfer of Jupiter's favor from the emperor Nero to 84 ____ Imperial Theology Vespasian as his divinely chosen successor. When Vespasian becomes emperor in 69 CE, the civil war of 68-69 ends and a year later his son Titus destroys Jerusalem and ends the rebellion in Judea. Vespasian issues coins that proclaim his coming to power as the work of several particular deities. Some coins depict Jupiter with a globe bestowing worldwide rule on Vespasian. Other coins p r o m i n e n t l y depict the g o d d e s s e s p e a c e (Pax) and victory (Victoria or Nike). These depictions present his reign as the will of the gods as well as announce particular divine blessings that he manifests among his subjects. In referring to the emperor Domitian ( 8 1 - 9 6 CE) the poet Statius highlights his representative role as Jupiter's agent, declaring, "At Jupiter's command he [Domitian] rules for him the blessed world" (Statius, Silvae 4.3.128-29). And in referring to the emperor Trajan (98-117 CE), Pliny identifies the gods as "the guardians and defenders of our empire" and prays to Jupiter for "the safety of our prince" (Pan. 94). This Roman imperial theology claimed that the gods through Rome's elite-controlled empire were sovereign over the world, had the right to direct it, and could determine what sort of human society, interactions, and behaviors should result. Compliance with Rome's rule was encouraged by presenting the empire's order as divinely sanctioned. For followers of J e s u s , these claims presented p r o b l e m s . C h r i s t i a n s f o l l o w e d o n e w h o m the e m p i r e h a d c r u c i f i e d . Crucifixion was the empire's ultimate way of removing a person who challenged or threatened the empire. Christians understood Jesus to be Lord, not Jupiter. They understood that Jesus manifested the kingdom or reign or empire of God, not of Jupiter and Rome. How were they to negotiate this web of interlocking ideas, the empire and society they legitimated, and the daily behaviors and practices they shaped? We will look at three New Testament writers who contest and imitate these claims with alternative theological and societal visions. PaulAs we saw in chapter 4, Paul addresses his letters to small communities of followers of Jesus in urban centers throughout the 85 empire. These communities often struggled to determine appropriate interaction with their surrounding civic communities. How should they negotiate the claims about Rome's divinely sanctioned role? Should they participate in festivals that honored the emperor? What should their attitudes and practices be toward imperial officials, festivals, and propaganda? Paul does not urge these Jesus communities to remove themselves from their cities or to turn their backs on civic affairs. He does not advocate escape from or dismissal of the political-civicsocietal challenges of the empire. Nor does he urge them to employ violent tactics to overthrow the empire. Rather, he helps them negotiate these civic settings and imperial claims so as to remain faithful to God's purposes for the world. By emphasizing their special identity in God's purposes that are not yet complete, he reinforces their group identity and boundaries as distinct from, yet as participants in their surrounding community He also frames their present challenges in the larger cosmic context of participating in God's just purposes for the world that, while not yet complete, will ultimately be victorious. That is, Christians belong to God's empire (Rom. 14:17; Phil. 3:20). Notions of covenant significantly influenced Paul's theological thinking. God was faithful to God's promises to Israel as God's people and to bless all of God's creation with life (2 Cor. 1:20). Moreover, Paul was an apocalyptic thinker who understood that God's purposes were not yet completed. At the imminent return of Jesus, God would end this world shaped by sin and death and e s t a b l i s h G o d ' s g o o d a n d l i f e - g i v i n g p u r p o s e s for a l l . Fundamental to these claims was the conviction that the sovereignty of the world belonged not to Jupiter and Rome but to God (Rom. 1:18-32; 11:33-36; 1 Cor. 8:6; 10:26 quoting Ps. 24:1, "The earth is the Lord's."). And God's universal and inclusive sovereignty was worked out in inclusive, ethnically mixed communities that provided communal experiences and practices alternative to the empire's elite-dominated, hierarchical, and exclusionary societal structures. Paul sees the gospel from and about God (Rom. 1:1) revealing God's sovereign purposes in Rome's world. Paul's gospel and communities present a significant theological challenge to Rome's claims. Fundamental to his gospel is the claim that there is one God (Rom. 3:27), the creator of all (Rom. 1:18-32). 86 "There may be so-called gods in heaven or on earthas in fact there are many gods and many lordsyet for us there is one God, the Father" (1 Cor. 8:6). Jupiter/Zeus was commonly called Father (Virgil, Aeneid 1.254, "the father of men and gods"), and the emperor was known as "Father of the Fatherland." He was seen as a father having authority over and blessing the members of his large (submissive) household that comprised his empire. Over against these claims Paul draws on Hebrew Bible traditions to identify Israel's God as the father of believers (Deut. 32:6; Jer. 3:19-20; Rom. 1:1, 7b). In Galatians 4:8 he dismisses these "socalled gods" as "beings that by nature are not gods," and in Romans 8:38-39 declares that all cosmic powers are powerless in relation to God's loving, saving actions. For believers, there is "one Lord, Jesus Christ" (1 Cor. 8:6; Rom. 1:1). Again Paul uses language here that was commonly used for the emperor ("Lord"). Paul's constant use of language closely associated with imperial power, and his redefinition of these terms with Christian content, indicates a direct challenge to the gospel of Caesar. Paul's attack not only dismisses polytheism, but also confronts Roman imperial theology, challenging the divine sanction for the empire. If there are no other gods, and only one divine Father, Rome's claims to rule and shape the world according to the sovereign will of Jupiter and the rest of the gods is exposed as empty. Christians could find here every reason for not participating in imperial rituals in houses, guild meetings, or civic festivals. Further, Paul's analysis of the world reveals that the world under Rome's power is not ordered according to God's purposes. It does not recognize God's sovereignty. It ensures misplaced loyalties whereby people worship creatures not the creator (Rom. 1:18-32). Paul calls idols or images, which must include those of emperors, the dwelling place of demons (1 Cor. 10:20-21). Worship of idols expresses the failure to acknowledge God; this failure is accompanied by destructive social relationships (Rom. 1:29-31). This world is ruled over by powers hostile to God's purposes, namely sin and death (Rom. 6:9, 14); flesh (Rom. 8:7); and Satan (Rom. 16:20). This present age under Rome's rule (contrasted with the coming age of God's reign) is evil (Gal. 1:4). It is marked by "ungodliness and wickedness" (Rom. 1:18). Its wisdom is folly compared to God's ways (1 Cor. 2:6). This is a scathing condemnation of Rome's hierarchical, exploitative, and legionary empire. 87 God's intervention, though, is bringing this situation to an end (Rom. 16:20). Paul's view is clearly at odds with claims that the emperor had saved the world and instituted the golden age blessed by the gods. The notion of the golden age, the saeculum aureum, was especially associated with the emperor Augustus (died 14 CE). It referred to a social order marked by virtue and tranquility and achieved through war, triumph, and domination. During the 50s, the time of Paul, Seneca employs it in his work "On Mercy," written to instruct the emperor Nero (54-68 CE). Seneca presents Nero as the only hope to rescue the world from sinfulness through his "merciful" rule! Seneca does not imagine for one moment the collapse of Rome's empire but writes to uphold it. Paul has another agenda. God "the Father of mercies" (2 Cor. 1:34), not Rome, will faithfully bring life to all people (Eph. 2:4). God's empire and justice will save the world (Rom. 14:17). God's agent in asserting God's sovereignty is Jesus. Paul focuses on Jesus' death, resurrection, and return. Jesus' faithfulness to God's purposes results in his crucifixion. Rome used crucifixion as a form of torture that removed threats to the imperial system and intimidated others into submissive compliance. Paul names "the rulers of this age" (1 Cor. 2:8) as those responsible for Jesus' death. The phrase has been interpreted to refer to either heavenly powers or human rulers. More likely it refers to both, designating the imperial agents and the supernatural powers at work behind the scenes. Paul's proclamation of "Christ crucified" (1 Cor. 1:23; 2:2) r e v e a l s the p r o f o u n d a n t i p a t h y b e t w e e n G o d ' s p u r p o s e s , expressed in Jesus, and the imperial world. Its rulers employ violence to protect their order and power against Jesus' threat. Jesus undergoes the fate of many enslaved by the emperor who dare to envisage a different order (Phil. 2:7). Despite claims of "eternal Rome" that will rule its empire forever, the cross also reveals the limits of Roman power. Rome cannot keep Jesus dead. God gives "life to the dead" (Rom. 4:17). Jesus' resurrection anticipates the destruction of the ruling powers (1 Cor. 2:8), the general resurrection, and the establishment of God's empire over all (1 Cor. 15:20-28). God will end this unjust and idolatrous imperial system at the final "coming" of Jesus (1 Thess. 2:19; 3:13; 4:15; 5:23; 1 Cor. 15:23). Paul again takes an imperial term, parousia, which commonly referred to the arrival of an imperial official, general, or 88 emperor (e.g., Josephus, JW 5.410, Titus), and applies it to Jesus arid the establishment of God's purposes. Paul identifies "the Lord Jesus Christ," who will come from heaven to accomplish these purposes, as the "savior" (Phil. 3:20). Again he uses a term "savior" (soter) that was widely used for the emperor (Josephus, JW 3.459, Vespasian). By using it for Jesus, Paul indicates that he does not think Rome and its emperors have saved the world from anything. Rome's claim to have brought security and safety, to have effected deliverance from danger (soteria), is false. Rather God saves the world from Rome and its false claims. At Jesus' coming, in a vision that imitates imperial triu m p h s , " e v e r y ruler and e v e r y a u t h o r i t y and p o w e r " are destroyed; "all his enemies" are put "under his feet" and subjected to God's reign (1 Cor. 15:23-28; Phil. 2:5-11). This "coming" of Jesus (1 Thess. 4:15), this "day of the Lord" (5:2), will take place at an unknown time. Jesus will invade the Roman world where people declare "there is peace and security" (1 Thess. 5:3). This phrase openly evokes Rome's boast to have gifted the world with these blessings (Josephus, JW 6.345-46). The Pax Romana ("Roman peace") was celebrated, for example, on the Ara Pacis Augustae in Rome, the Altar of Augustan Peace. This cube-shaped monument, with highly decorated walls, witnessed to Rome's victories in wars that derived from its faithfulness to its god-given mission to rule the world. Faithfulness produced military victories, which produced peace. Peace meant submission to Rome enforced by military might or negotiated through treaties and alliances. "Peace" and "security" described a world under elite hierarchical control and ruled for the benefit of a few. Paul critiques this imperial world as "night" and "darkness" (1 Thess. 5:5). It is contrary to a world ordered according to God's just purposes for well-being (salvation) for all people. In the time before Jesus' coming, Paul sees God at work in the midst of and over against Rome's world. He names God's working as "grace and peace" (1 Thess. 1:1). Grace is God's powerful free gift that creates peace, a world marked by wholeness and justice for all people. In the meantime, believers participate in God's purposes with lived faithfulness, love, and hope for God's imminent salvation from such a world, which will be accomplished at Jesus' coming (5:8-11). In Romans, he declares that God is at work now, powerfully and 89 faithfully, for salvation (Rom. 1:16-17). This is the gospel, the good news that reveals the justice or righteousness of God through God's faithfulness (1:16-17). Paul declares: "I am not ashamed of the gospel; it is the power of God for salvation to everyone who has faith [or faithfulness], to the Jew first and also to the Greek. For in it the righteousness [or justice] of God is revealed through faith for faith." These verses in the opening chapters of Romans sum up the letter's central claim. Significantly, the two verses are full of words that were commonly used imperial terms. Again Paul confronts imperial claims, denying their legitimacy by contrasting them with God's significantly different purposes of justice for all. Good news: This term often denoted the empire's benefits such as an e m p e r o r ' s birth, military conquest, or accession to power (Josephus, JW 4.618). In the tradition of Isaiah (especially Isa. 40 and 52), Paul uses the same language to speak not of Rome's so-called blessings but of God's saving activity and the establishment of God's reign or empire in place of Rome's (Isa. 52:7). To believe the gospel is to commit to and to be obedient to God as king or emperor (Rom. 1:5). Salvation: This term also named the blessings of Rome's world, especially its security and order achieved through deliverance from all threats and dangers. But this order, of course, was n o t h i n g other than benefit for a f e w R o m e ' s m i l i t a r y powerand enforced submission for most. Again evoking the tradition of Isaiah, Paul presents an alternative reality in which God's saving power frees from imperial powers (Isa. 45:17; 46:13) and creates wholeness or well-being for all (49:6; 52:10). Righteousness or Justice: Paul's gospel is a challenge to Rome, and he uses the imperial-sounding language of victory to affirm God's inevitable triumph (1 Cor. 15:57). But at least one factor suggests Paul is not just imitating the empire. What God is doing is fundamentally different. Rome proclaimed its mission to give justice to the world "to crown peace with justice" (Virgil, Aeneid 6.851-53; Acts of Augustus 34). There was a temple in Rome to Iustitia, the goddess Justice understood to be at work through Rome. Roman justice, however, was inevitably an agent of its imperial system. It functioned to sustain the control of the elite over the rest by punishing and removing 90 threats (like Jesus) to its power. Paul sees the gospel, not Rome, as revealing the justice (or righteousness) of God. And this justice is not punitive, self-serving, benefiting only the elite. This justice comprises God acting rightly or faithfully to God's covenant purposes announced in the promise to Abraham to bless all the nations of the earth (Gen. 12:1-3). God's action in the world is to make things right for all people, "to the Jew first and also to the Greek." This work is under way in Jesus' death and resurrection in which, by raising Jesus, God overcomes Roman injustice. This "right-making," justice-bringing work will be completed at Jesus' return. Faith/fullness: God's actions involving salvation or justice or r i g h t e o u s n e s s derive from G o d ' s faithfulness. T h e y are expressed through the faithfulness of Jesus (Rom. 3:21-26) and encountered in human faithJhilness (often translated "believing" or "faith") that embraces lived trust, commitment, loyalty, and obedience (Rom. 1:5). Paul uses language that was central to imperial claims. The goddess Fides, loyalty or faithfulness, was understood to be active through the empire's rulers. The emperor represented Rome's loyalty or faithfulness to treaties and alliances (Acts of Augustus 31-34). But such loyalty required a reciprocal loyalty comprising submission to Rome's will and cooperation with Rome's self-benefiting rule. Paul announces God's faithfulness to vastly different purposes (justice for all) and invites hearers of the gospel to entrust themselves to those purposes, loyally participating in God's justice-bringing work. Paul sees this theological challenge to Roman claims taking societal shape. God's work, proclaimed through Paul's mission, shapes communities that embody a different identity and alternative practices as participants in God's purposes. The Philippian believers represent God's purposes on earth though their citizenship or "commonwealth" is in heaven. Whereas they belong to the abode of God from which Jesus will come, they live now as a colony of foreigners or resettled veterans in foreign territory. Paul commonly addresses the communities as ekklesia (1 Cor. 1:2; Gal. 1:2; Philem. 2). The term echoes both the language of the Greek form of the Old Testament (the Septuagint) for the assembled people of God, as well as the citizen assembly of Greek-speaking cities in the eastern Roman Empire. The term presents Paul's churches as rival assemblies. 91 He also frequently uses household language to denote their identity and relationships. With God as their father, they are brothers and sisters (Rom. 12:1; 1 Cor. 1:10-11). They are to show "familiar' love for one another (Rom. 12:10). These assemblies are to exhibit different social relationships, replacing the exploitative social and gender hierarchies of the empire with more egalitarian and caring relationships (Gal. 3:28). Meals are to represent these different relationships (1 Cor. 11:17-34) as does the alternative economic practice of the collection from Gentile churches for the poor in Jerusalem (1 Cor. 16:1-4; 2 Cor. 8-9; Rom. 15:25-33). It should be noted, though, that as much as Paul outlines this ideological and social alternative and challenge to the empire, he is also deeply influenced by this world of empire. He imitates imperial concepts in his presentation of God's overwhelming power. He celebrates his ministry as always being led in triumph (2 Cor. 2:14). He employs his own "imperial" and patriarchal authority to demand loyalty and obedience from the churches (1 Cor. 4:15). He enjoys the patronage of those who support his ministry (Phoebe, Rom. 16:1-4). He declares that slavery does not matter (Gal. 3:28) but does not seem to work against it. He urges submission to Rome in the difficult passage from Romans 13:1-7 that we will discuss in chapter 8. Yet throughout he also announces God's judgment on Rome's empire, and God's alternative life-giving and just purposes. The communities are to live as participants in God's purposes as communities of resistance and solidarity with those oppressed by Rome's power until God establishes God's purposes at Jesus' return. GospelsTheologically and socially the Gospels also contest these claims that the gods have chosen Rome to manifest the gods' sovereignty, presence, agency, and blessings. Matthew SovereigntyMatthew's Gospel asserts repeatedly that the world belongs not to Rome at Jupiter's behest, but to God. God's sovereign purposes are being asserted over Rome's. 92 Matthew's opening genealogy reviews Israel's history by highlighting three big events that reveal God's sovereign purposes (1:1-17). God promises Abraham that through him God will bless all the nations of the earth (Gen. 12:1-3). God promises to David a kingdom that will last forever (2 Sam. 7:14). But the third major event, the fall of Jerusalem and exiling of leaders to Babylon in 587 BCE, seems to put these purposes at risk. The loss of land, the destruction of Jerusalem, and the exiling of its leadership seem to be the end of any blessing for others, let alone of an eternal kingdom. Verses 12 through 16 indicate, however, that God's purposes continue with a surprising return from exile. Imperial power cannot divert or defeat God's work. In between these major events, God works through all sorts of charactersmale and female, good kings and bad kings, Jews and Gentiles, the important and the marginalto c o n t i n u e G o d ' s p u r p o s e s in J e s u s the Christ. Significantly, Rome is not included in this review. God asserts sovereignty in the conception of Jesus through the Spirit (1:18-25). God commissions Jesus to manifest God's saving presence in a world of sins (1:21-23). Rome's empire does not order the world according to God's purposes. True to form, one of Rome's agents challenges God's work in chapter 2. King Herod, in power as Rome's puppet king, uses his allies, the Jerusalem-based leaders, and the magi from the east to attempt to kill Jesus as a threat to his rule. However, God protects Jesus by using angels and dreams to thwart Herod's efforts. Three times the chapter ironically notes Herod's death (2:15,19,20). In 4:1-11, the devil challenges the outworking of God's purposes by testing Jesus. The heart of the temptations concerns whether Jesus will be loyal to God's purposes as God's son and agent (3:13-17), or whether he will obey the devil. In the third temptation (4:8-9), the devil offers J e s u s "all the k i n g d o m s [empires] of the world" if Jesus will obey the devil. This offer is a stunning assertion of the devil's sovereignty over the world and its empires. It reveals an alliance between the devil and Rome and unveils the devil as the power behind Rome's empire. Several verses later, in Rome's devilish world, Jesus begins his public ministry with a counterassertion. He announces God's sovereignty with the words, "the kingdom [empire] of heaven has come near" (4:17). The word "kingdom" or "empire" (in Greek, 93 basileia) is the same word the devil used in 4:8 and is a common word for Rome's empire. The phrase ''kingdom [empire] of heaven" sums up Jesus' commission to manifest God's saving presence. The rest of the Gospel elaborates God's empire or saving presence in scenes that show the assertion of G o d ' s sovereignty over human lives in the calling of disciples (4:18-22; 9:9); over diseases (4:23-25; chaps. 8-9); the wind and sea (8:23-27); demons (8:28-34; 12:28); sin (9:2-8); death (9:18-26; chap. 28); and over the Jerusalem temple and Jesus' opponents, the ruling group allied with Rome (chaps. 21-22). Jesus' language asserts God's sovereignty as "Our Father in heaven" (6:9) and "Lord of heaven and earth" (11:25). He teaches disciples to pray for God's sovereignty to be established: "Your kingdom [empire] come. Your will be done, on earth as it is in heaven" (6:10). His resurrection asserts God's sovereignty over both death and Rome's power. Rome is not able to keep Jesus dead. The risen Jesus declares that he shares with God "all authority in heaven and on earth" (28:18). The ultimate assertion of God's sovereignty comes when Jesus returns as Son of Man. In 24:27-31, his return echoes Daniel 7 where God destroys all empires and establishes God's neverending empire. Jesus destroys Rome's army (the eagle, 24:28) and the cosmic deities that supposedly sanction Rome's empire (24:29). Judgment over all people (24:31; 13:39-42) assesses whether people have fed the hungry, clothed the naked, and cared for the sick and imprisoned (25:31-46). Attending to these tasks is how disciples are to live until God's sovereignty, not Rome's, is established over all. As much as Matthew uses this eschatological expectation to contest Rome's sovereignty, it should be noted that the Gospel imitates imperial ways in this scene with the violent and forced imposition of God's empire over all people. PresenceMatthew's Gospel disputes the claim that Rome and the emperor manifest the presence of the gods. Rather it asserts that God's presence to save and rule the world is manifested by Jesus. In three very strategic locations, the Gospel asserts God's presence is manifested in Jesus. In 1:22-23, Jesus' commission to save from sins is elaborated with a citation from Isaiah 7:14 (and Isa. 8:8, 10) that identifies Jesus as " 'Emmanuel,' which means 'God is with 94 us.' " This opening statement frames the Gospel's whole narrative. All of Jesus' actions and wordshis teaching, healings, feedings, meals, exorcisms, conflictsmanifest God's saving presence. The citation from Isaiah 7:14 highlights another dimension. Isaiah 7 through 9 concerns a threat to the southern kingdom Judah from the northern kingdom Israel and its ally Syria. God offers King Ahaz and his people a sign of God's presence with them and of their salvation. The birth of a baby, the next generation, promises their deliverance from the imperialist threat. This future, though, requires their present trust in God. Evoking this story interprets the circumstances of Matthew's community. They, too, live with an imperial threat. The baby Jesus is a sign to them of God's presence with them and deliverance from that threat. They, too, must trust God to work out God's saving purposes. The second explicit statement of God's presence manifested by Jesus occurs in the middle of the Gospel in 18:20. Jesus promises to be present with the community of disciples gathered for prayer. Significantly, this assurance comes as part of a chapter that is often called "the community discourse." In chapter 18, Jesus spells out the sort of community that disciples who are committed to God's empire constitute. This community welcomes and cares for the vulnerable and least (18:1-14), practices reconciliation (18:15-20), and extends forgiveness (18:21-35). These commitments to mercy, inclusion, service, and reconciliation differ greatly from the empire's commitments to domination, exploitation, self-enriching rule, and submission. Jesus' presence constitutes an alternative societal experience. The third explicit statement of God's presence manifested by Jesus comes at the close of the Gospel (28:18-20). The risen Jesus sends his disciples in mission to the world under Rome's power. But unlike Rome's mission to dominate and subdue, disciples are to announce and enact God's life-giving purposes and presence revealed by Jesus. Jesus promises to be with them "always, to the end of the age." His presence guides them in their discipleship, but also anticipates the final establishment of God's purposes. AgencyThe Gospel challenges the imperial claim that the emperor and Rome are agents chosen to manifest the gods' sovereignty, will, and presence among humans. It presents Jesus as God's chosen 95 agent, commissioned to enact God's saving presence and lifegiving empire among humans. As we have noted, the very name given to Jesus denotes his commissioning to be God's agent. The angel of the Lord instructs Joseph to name him "Jesus" because "he will save his people from their sins" (1:21). His name, used some one hundred and fifty times in the Gospel, constantly articulates his identity as agent of God's purposes. The Gospel also employs various "titles" for Jesus that denote his identity as agent of God. The opening verse identifies him as "Christ" (1:1, 17). This term, the Hebrew form of which is Messiah, means to be "dripped on" or "anointed." Anointing with oil signified that a priest (Lev. 4:3, 5), king (Ps. 2:7), prophet (1 Kings 19:16), and even the Gentile ruler, the Persian Cyrus (Isa. 44:28; 45:1), were set aside or commissioned by God for special roles. Some, but by no means all, Jewish traditions expected various types of messiah figures. Some of these figures would be anointed or commissioned to free the people from Rome (Ps. Sol. 17; 4 Ezra 12:32-34) or to have a role in establishing God's empire (1 Enoch 46-48). By identifying Jesus as Christ, the Gospel denotes him to be God's chosen agent. Other terms express a similar claim. The Gospel identifies Jesus as God's son (2:15; 3:17; 4:3, 6; 11:25-27; 16:16). For first-century Christians, this term denotes one who is in special relationship with God and is an agent of God's purposes and will. For example, in the Hebrew Bible, the term son denotes the king (Ps. 2:7), Israel (Hos. 11:1), and the wise person (Wisd. of Sol. 2), all of whom represent God's purposes. As God's son, Jesus is the agent of God's saving presence and empire (1:21-23; 4:17). He enacts God's will in his words and actions. Those who commit to Jesus continue this task of being agents of God's purposes (10:7-8; 28:19-20). They are called "sons" or "children" of God. They make peace, not based on military power, but on God's justice (5:9). They love and pray for their enemies and persecutors rather than destroy them (5:44-45), thereby embodying God's indiscriminate love for all people. Blessing or Societal Well-Being The empire claimed that as the agents of the gods' sovereignty, presence, and will they brought well-being or blessings of peace, fertility, harmony, security, safety, and so forth to the world. Matthew does not accept this elite view and exposes it as false. 96 Rather, it is God's work in the world through Jesus and his followers that manifests God's blessing, namely God's empire (4:17; 5:3, 10), good news (4:23), and justice/righteousness (5:10, 20; 6:33). Like Paul, Matthew uses vocabulary often used in imperial claims. The Gospel reveals the world under Rome's rule to be a desperate, not a blessed, place for most inhabitants. The Gospel is peopled with sick folk (4:23-24; chaps. 8 - 9 ) . Jesus brings healing. Rome's world is peopled with folk under the control of demons (4:24; 8:28-34). Jesus' exorcisms bring deliverance. Rome's world is a hungry place. Disciples pray for daily bread (6:10). Twice Jesus heals and feeds large crowds, supplying them with abundant food (14:13-21; 15:29-39). The Sermon on the Mount, the first teaching discourse in the Gospel, opens with Jesus' declaration of blessings that result from the establishment of God's empire (4:17; 5:3-12). The first beatitude blesses the "poor in spirit." Matthew does not spiritualize the beatitudes and bless a "spiritual" condition. Rather, Jesus has just healed numerous sick people (4:24-25). Their sickness has rendered their already poor and desperate lives even more precarious. Poverty is never only a physical phenomenon; it destroys a person's very core. It eats away at their spirit. Jesus declares these "poor in spirit," the materially, literally physically poor that comprised some 97 percent of Rome's world, blessed. Why are they blessed? God's empire is at work already to restore the world to God's just purposes, and these purposes will be established. Similarly, in 5:5 Jesus blesses the meek. The meek are not to be understood as the wimps or the doormats. Rather Jesus quotes from Psalm 37 in which the meek are the literal poor who are exploited by the powerful and wealthy and deprived of their land. Jesus quotes the repeated promise of Psalm 37 that God can be trusted to restore to them land, the basic resource needed for survival. The beatitude anticipates the eschatological completion of God's purposes. The beatitudes also express God's blessing on those who live according to God's purposes in the present. Those who hunger and thirst for justice, who are merciful, who are pure in heart, who make peace and pay the consequences in opposition experience God's favor. Their actions fundamentally oppose values and practices of the empire. They participate in an alternative societal reality. In 20:25-26, Jesus contrasts this way of life based on mercy 97 and service with that of the empire. In contrast to the "rulers of the Gentiles" and their "great ones [who] are tyrants," the community of disciples identifies itself with the powerless and vulnerable. As slaves they are to seek the good, not the goods, of others. Like Paul, Matthew challenges Rome's claims theologically and envisions an alternative societal experience in which God's sovereignty, presence, and blessing are encountered now and in the future through Jesus, God's agent. LukeWhereas the focus has been on Paul and Matthew, I will briefly note one more example in which a Gospel contests aspects of Rome's claims for divine sanction. The opening chapters of Luke's Gospel introduce Jesus in language that disputes Rome's claims. The angel announces to Mary Mary's conception of Jesus and God's commissioning of Jesus as agent of God's sovereignty. "The Lord God will give to him the throne of his ancestor David. He will reign over the house of Jacob forever, and of his kingdom there will be no e n d " (1:32-33). Contrary to Rome's claim of divinely sanctioned rule that lasts forever, Luke recalls the promise to David of a kingdom that lasts forever (2 Sam. 7). And contrary to Rome's harsh and exploitative rule, Luke recalls the tradition that David is an agent of God's merciful and just rule (Ps. 72). Different sovereignties, agencies, and understandings of societal well-being clash. Mary continues the theme in her hymn of praise, commonly called the Magnificat. " (Luke 1:46-56, selections). Mary's words celebrate God's overthrow of Rome's world. Luke emphasizes the birth of Jesus in the world ruled by the emperor Augustus and the governor Quirinius of Syria (2:1-2). All inhabitants participate in a census, the basis by which Rome levied taxes and tribute (2:1-5). The angel announces to shepherds the birth of Jesus using language that, as we have discussed above, contests Rome's claims: "I am bringing you good news of 98 great joy for all the people: to you is born this day in the city of David a Savior, who is the Messiah, the Lord" (2:11, emphasis added). The announcement presents Jesus' birth, not the emperor's, as good news. Jesus, not the emperor, is Savior and Lord. Jesus, not the emperor, is the rightly anointed agent (Messiah) and king in the line of David, entrusted with representing God's purposes. And those purposes do not reserve blessing for the privileged, powerful, wealthy few, but extend it to all people. As we noted in chapter 2, Jesus begins his public ministry in Luke's account by quoting Isaiah 61. With this quote he declares his ministry to be God-given and himself to be the agent of God's blessing that will transform Rome's:18-19) The language of "release" and "year of the Lord's favor" recalls Leviticus 25. This chapter announces a Jubilee year every fifty years in which slaves are freed, debts cancelled, and land returned to original owners. The Jubilee year was a mechanism for preventing a society from developing that was dominated by the wealthy and powerful. Rome's world is not God's will. Jesus announces that God's activity to save and transform this world is under way. Like Paul and Matthew, Luke offers a theological and societal challenge to Rome's claims. ConclusionRome asserted divine sanction for its empire, claiming that the gods had chosen Rome to manifest the gods' sovereignty, presence, agency, and blessings on earth. New Testament writings dispute Rome's claims, asserting over against them that God's purposes will eventually hold sway over human affairs. Paul's Letters and Gospels like Matthew and Luke present Jesus as the agent of God's sovereignty, presence, will, and blessings in the present and future. Disciples of Jesus are to continue his role in the meantime. 99 CHAPTER 7 ome's empire influenced every aspect of a person's life. In this chapter, we will look at some ways the early Christians and New Testament writers negotiated three everyday issues: supporting themselves (economics), feeding themselves (food), and caring for themselves (sickness and healing). EconomicsSome 2 to 3 percent of the population possessed most of the empire's wealth. The overwhelming percentage of the empire's i n h a b i t a n t s lacked it and struggled c o n s t a n t l y to sustain a subsistence-level existence. The struggle was cyclic. They knew times when there was enough (or even a little surplus) and frequent times when there was too little. I observed in chapter 4 that the empire's wealth was based in land ownership. Elites controlled the production, distribution (trade), and consumption of its products. That is, the economy was embedded in and reflected the hierarchical and oligarchical sociopolitical structures of the empire. We also saw that the elite used taxes, rents, loans, interest, tribute, and trade to redistribute production from peasant farmers, artisans, and unskilled workers to themselves. The ruling few gained considerable wealth, enjoyed 100 lavish lifestyles, and consumed much of the production. The majority's hard manual work sustained the excessive lifestyles of the few. That is, economic structures were exploitative and unjust. MatthewIn this context of lack, how are people to live? Is wealth evil, something to be hoarded or something to be redistributed? Matthew's Gospel, like numerous New Testament writings, warns about the dangers of wealth. While talking about the "exceeding justice" (5:20) that is to mark the life of disciples committed to God's empire, Matthew's Jesus urges disciples to share whatever possessions they have. They are to give to those who beg (the desperate, 5:42#), to those who want to borrow (the equally poor, 5:42b), and to those in need (6:1-4, like themselves). Jesus encapsulates this making available of their limited possessions to others in his subsequent instruction not to "store up" (acquire, value) possessions (6:19-21). This communal responsibility reflects hearts focused on "heavenly treasures," doing the will of God (6:19-23). It is the way of life for a people who have decided against serving mammon (property, possessions), but have enslaved themselves to God's just purposes (6:24). It is the sort of behavior that anticipates the full establishment of God's purposes (25:31-46). These same issues are evident in Jesus' encounter with the young "rich man" (19:16-30). This unnamed man, one of the elite, has "many possessions" (19:22-23). He asks Jesus about "eternal life," a synonym for entering life (19:17), being perfect (19:21), entering the empire of heaven (19:23-24), and being saved (19:25). He wants to participate in God's purposes. In response to Jesus' questions about his social ethic, he declares that he has kept the commandments against murder, adultery, stealing, and bearing false witness, while honoring his parents and loving his neighbor (19:18-19). His answer, though, reveals his commitments to wealth and not to God's justice. His "many possessions" indicate that this elite man has deprived others of what they need (stealing). He has not loved his neighbors. Jesus offers him a program that, if followed, would dismantle the high-status world of the empire's powerful and wealthy (19:21). Jesus lays out a process of repentance that begins with 101 selling the man's possessions. Presumably the man's possessions include his land, slaves, house, and investment properties, markers of his status and power. Then he is to divest himself of his wealth by giving it to the poor, those lacking resources, despised and exploited by the elite. This is an act of restitution that reverses the transfer of wealth from nonelites to elites and anticipates the redistribution of resources that will happen when God's purposes are fully established (cf. 5:5). Then he is to join the community of followers of Jesus in new social relationships marked by shared resources. But the man declines Jesus' invitation (19:22). He is one who, like the seed sown on rocky ground, prefers the "lure of wealth" (13:20-22). Imperial wealth, not God's empire, rules his heart (6:24). He upholds the imperial system. By contrast, the disciples have left everything to follow Jesus (19:27). This is not a literal statement since its speaker, Peter, has a house and family (8:14-15). However, it does signify different priorities. Jesus promises reward in the redistribution of resources and reshaping of community that mark the full establishment of God's purposes (19:29). JamesThe Letter of James addresses a community experiencing significant economic oppression in an unknown location. They are identified in 1:1 as "the twelve tribes in the Dispersion." If this address is taken literally, they might live anywhere outside Palestine. If it is taken metaphorically, it would refer to their marginal location vis-a-vis civic, political, and economic rights. The community, committed to Jesus (2:1), comprises mostly poor nonelites. The letter refers to the lowly (1:9); widows and orphans (1:27); the poor (2:2, 5-6); women and men who live by alms (2:15); those who do business (5:13); rural laborers and harvesters (5:4); and small farmers (5:7). These folk suffer various injustices. Widows and orphans were vulnerable in an androcentric world that required male protection. The poor have dirty c l o t h i n g and e x p e r i e n c e s o c i a l p r e j u d i c e ( 2 : 2 - 5 ) . T h e y are "dragged into court" (2:6). The verb "dragged" suggests physical or legal violence. Some, men and women, lack clothing and food (2:15). Rural laborers are not paid their wages (5:4). 102 The letter attributes this suffering not to deficiencies of character (i.e., laziness), ethnicity, or gender, but to their oppressors. The rich dress elegantly and display their wealth with fine adornment, rather than using it to assist the poor (2:2). They oppress the poor in court (2:6). They blaspheme the name of God or Jesus that identifies this community of the poor (2:7). The rich accumulate and consume wealth (5:1-3). These wealthy, powerful landowners deny rural laborers their wages (5:4). They live luxuriously and pleasurably (5:5). They condemn and murder the righteous poor (5:6), probably a reference to the effect of withholding wages and depriving a household of necessary resources. Their oppression is social, legal, and economic. There are also "conflicts and disputes" among the group (4:1). It is not clear what the conflicts are over. They may be class-based between the poor and the rich, but it is not clear that the oppressive rich belong to the group. Perhaps the conflicts involve the poor and the not-quite-so-poor mentioned in 4:13-17. These latter folk have some business skills and opportunity for making money in other towns. The letter rebukes them for their presumption about the future (4:14), their lack of attention to the Lord's will (4:15), their arrogant boasting (4:16), and their failure to do the right thing, namely provide for those in need (4:17). Or, the conflicts may arise within the oppressed poor. In 2:1-4, for example, some of the poor who seem to have internalized the practices of their society are rebuked for imitating its deferential behaviors and dishonoring their fellow poor. In 4:1-10 the letter's audience is rebuked for being "friends of the world" rather than friends of God. Friendship with the world opposes God's purposes and seems to comprise imitating or desiring cultural values and practices rather than God's alternative way of life. In 4:11-12 they are forbidden to speak evil against one another. They are to listen to one another, being slow to speak and slow to anger (1:19). In response to their oppression and conflict, the letter seeks to sustain lives that are faithful to God's purposes. (1) It assures them of God's preference for and presence with the poor (2:5). Rahab the prostitute is an example of a culturally marginal and despised person w h o m God vindicates (2:25; see Joshua 2). God has chosen the poor and has promised them 103 participation in God's empire (2:5). The current distress is not God's making (1:13). God will vindicate them when God's purposes are finally established (1:12; 5:7-8). (2) Conversely, the letter assures the poor of the inevitable demise of the rich and powerful who are under God's judgment. God brings the rich low, for they disappear like a withered flower in scorching heat (1:10-11). As friends of the world, the rich are enemies of God (4:4). Future miseries, the end of their wealth, and destruction await them (5:1-6). (3) In the meantime, the poor are to form a faithful community. Nineteen times the letter uses "brother and sister" language to secure their identity (1:2, 9, 16, 19; 2:1, 5, 14-15; 3:1, 10, 12; 4:11 [3 times]; 5 : 7 , 9 , 1 0 , 1 2 , 1 9 ) . It calls them to "love [their] neighbor as [themselves]" (2:8) and to show mercy (2:13). (4) This community is to be marked by perseverance (1:3-4, 12; 5:11). This perseverance is not passivity or resignation, even t h o u g h major c h a n g e s in the e m p i r e ' s p o l i t i c a l , l e g a l , a n d economic structures are not forthcoming. Rather, it is a form of resistance in that it refuses to be broken down by the oppressive circumstances. In 1:4 this perseverance affects maturity or perfection in faith through participating in God's purposes in the present. In 1:12 it means future participation in God's life when God's just purposes are finally established. Endurance means trusting God to complete G o d ' s purposes (5:7-9). In 5:11 J o b models endurance. Job refused to accept his suffering as normative, vigorously protesting it and demanding God's justice. (5) The community is also to be marked by integrity. The writer urges them to consistency between their confessing and their living. They are to be hearers and doers of the word (1:22-24). In 2:1-7 the writer points out that their favoritism toward the rich is inconsistent with God's preference for the poor, and challenges them to consistency. Likewise, there should be consistency between their faith and their works (2:14-26), and between their faith and their words (3:2-12). Such integrity of speech removes the need for oaths (5:12). (6) The community is to practice nonviolence toward their oppressors (5:6). Nonviolence does not mean ready compliance. The letter offers no instruction to obey rulers or submit to the wealthy and powerful. Instead of deference or retaliation, they are 104 exhorted to endurance (above) and peacemaking (3:18). As with Matthew's beatitude (cf. Matt. 5:9), peacemaking means living for the wholeness and well-being that result from God's purposes. Of course, such peace differs greatly from the Pax Romana ("Roman peace") from which the powerful wealthy benefit. (7) The community is to pray in its suffering (5:13). They are to pray for the sick (5:14-15) and for one another (5:16), thereby securing their communal relations. Elijah provides an example of powerful and effective prayer causing God to withhold and supply rain for the harvest. Prayer, then, is another strategy against the wealthy's unjust hoarding of resources that cause some to not have enough to wear or to eat (3:15). RevelationIn Revelation chapter 18, the announcement of the downfall of Rome's empire particularly emphasizes its economic oppression. How does this announcement address the churches in seven cities in the province of Asia (Rev. 2-3)? The chapter begins with an angel declaring, "Fallen, fallen is Babylon the great!" (18:2). Naming Rome "Babylon" echoes the prophet Jeremiah's condemnation of the Babylonian Empire (Jer. 50-51). The choice of Babylon reminds readers that this previously dominant empire has passed from the world scene because of God's judgment. Rome will experience the same fate. Another prophet, Ezekiel, had also declared judgment on Tyre's vast trade and economic empire (Ezek. 26-28). As I noted in chapter 4 above, Babylon/Rome is also identified as a whore (17:1, 5, 15, 16; 19:2). The image denotes faithless activity that benefits only Rome and corrupts others; "The kings of the earth have committed fornication with her" (18:3). The image of illicit sexual activity ("fornication") commonly appears in prophetic condemnations of people unfaithful to God's purposes. The use of these images for Rome's empire, and particularly its economic activity, presents it as self-benefiting, exploitative, harmful to others, and under God's judgment. The further references to Babylon/Rome as a "dwelling place of demons" (18:2) and of deceiving the nations by "[her] sorcery" identify it as demonic and bewitching (18:23). Revelation locates Rome's illicit economic activity within the larger context of Rome's imperial rule secured by military power 105 and religious practices. In chapter 13, Revelation describes Rome as a beast with "authority over every tribe and people and language and nation, and all the inhabitants of the earth will worship it" (13:7-8fl). The imperial cultdiscussed above in chapters 4 and 5presents Roman political power as sanctioned by the gods and fosters submissive compliance. Moreover, another beast "causes a l l . . . to be marked on the right hand or the forehead, so that no one can buy or sell who does not have the mark" (13:16-17). Economic activity means participation in this political-militaryreligious power. Chapter 18 emphasizes the same interconnections. Verse 7 connects economic extravagance ("she lived luxuriously") with the idolatrous self-glorification of religious observance ("she glorified herself") and the political control of an eternal empire ("I rule as a queen;... I will never see grief"). The chapter ends by making explicit in verse 24 the fourth dimension, the vicious military conquest on which the empire was founded: "And in you was found the blood of prophets and of saints, and of all who have been slaughtered on earth." Economic oppression is embedded in the empire's political-religious control and sustained by its military viciousness. The chapter announces God's judgment on Rome with plagues, pestilence, mourning, famine, and fire (18:4-8). Judgment is the present activity of the Lord God who is "mighty," superior in power to Rome (18:8). This announcement of judgment brings forth three laments or funeral dirges from three groups who have vested and invested interests in maintaining Rome's oppressive status quo. They lament the loss of the source of their wealth. The first lamenters are "the kings of the earth" (18:9-10). In Psalm 2 the "kings of the earth" resist God. The designation identifies Rome's allied rulers theologically as opponents of, but no match for, God's purposes. Rome commonly formed alliances with local elites such as client kings (see the discussion in chapter 3, above), as well as members of ruling classes in provinces and cities throughout the empire. These political and economic allies benefited from Rome's power. By committing "fornication" with the whore Babylon they acquired luxurious lifestyles. Their lament emphasizes Rome's great power ("great city"; "mighty city"), but in noting Rome's rapid demise ("in one hour") they ironically attest its weakness before God's powerful judgment. 106 The second lamenters are the merchants or traders (18:11-14,1517a). These are not members of the elite, though they have benefited greatly from the empire's power and economic reach (18:15 A). Often elites participated in trade indirectly through investment and through supplying agrarian-derived products. Merchants played a crucial role in moving goods from the provinces to Rome, the center of the empire. Whereas "all roads lead to Rome," even more so did all ships. Shipping was less expensive than road transportation and it could carry greater quantities. The first of their two laments focuses, self-centeredly, on the loss of Rome as a market: "No one buys their cargo anymore" (18:11). Verses 12 and 13 identify twenty-eight items brought to Rome from the provinces. Marty of the listed items were expensive items that serviced the conspicuous consumption of excessive elite lifestyles. The list is wide-ranging, including highly valued decorative items (gold, silver, precious stones, pearls); textiles (fine linen, purple, silk, scarlet); scented or citrus wood (used especially for making expensive tables); items made from ivory and from costly wood; metals (bronze, iron); marble; spices (cinnamon, amomum, myrrh, frartkincense); food items (wine, olive oil, wheat); animals (cattle, sheep, horses); transportation (chariots); and human slaves. Among the luxury goods are everyday items, especially those of food. By one estimate, Rome needed six thousand boatloads of grain per year to arrive at its port Ostia to keep Rome fed. Paul travels on one such boat from Alexandria in Acts 27:6. But while the merchants lament the loss of a market, items on the list reveal Rome's exploitative practices that result from its grasping power. Military defeat, greed, and taxation ensure the transfer of goods from the provinces to Rome. Gold and silver, for instance, were procured from Spain where the mines had become state property, often through confiscations. Citrus wood grew along the North African coast and was greatly depleted by the end of the first century. The huge demand for and widespread use of ivory had a similar destructive impact on elephants in North Africa. Cinnamon, like other spices and pearls, probably derived from outside R o m e ' s empire in the Far East (India, C h i n a ) . Romans, however, thought it originated with southern Arabian merchants who, it seems, deceived Roman merchants so as to protect their supplies from grasping Roman hands. Wheat and wine 107 were frequently procured by taxes and tributes paid by provinces in kind. And the final reference to slaves disguises both the practice of coerced labor and the huge trade in human misery involving those taken as prisoners in war (70,000 from the Jewish war of 66-70, according to Josephus, JW 6.420), children and adults sold into slavery, exposed infants, and voluntary enslavements. The list represents losses from provinces and often the labor of numerous provincials who received minimal compensation. The merchants continue their self-centered lament (18:14-17a). Without lamenting the city itself, they now lament the loss of such great wealth. Their description of the city recalls the description of the whore in 17:4, and rehearses items from the list of traded goods in 18:12-13. Economic exploitation defines Rome. The third group of lamenters comprises those who travel the seashipmasters, seafarers, sailors, and traders (18:17b-19). They benefited from and effected the massive movement of goods to Rome. The destruction of "the great city" is a serious blow for their interests since they too "grew rich by her wealth" (18:19). Like the merchants, they note Rome's rapid demise "in one hour." The judgment is total. In contrast to the list of Rome's exploitative trade (18:12-13), the angel catalogs its destruction (18:21-23). All sounds and activities end. There is no more music, artisan activity, food production, or light, human interaction. The chapter's audience is invited to rejoice "for God has given judgment for you against her" (18:20). This rejoicing contrasts the mourning and weeping of the kings (18:9), the merchants (18:11), and the mariners (18:19). How does this judgment scene function for the seven churches of the cities in Asia addressed by Revelation (chaps. 2-3)? Why include the laments of the merchants and mariners? Will the audience mourn or rejoice? The chapter's address to the churches is clearly stated. A voice from heaven instructs them, "Come out of her, my people, so that you do not take part in her sins, and so that you do not share in her plagues" (18:4). The heavenly voice calls them to distance themselves from their culture and its economic practices. This emphasis is similar to that of the letters in chapters 2 and 3. It is likely that a m o n g the c h u r c h e s in Asia were p e o p l e involved in trade and transportation. Perhaps some in the church 108 at Laodicea, for example, had accumulated significant wealth while others (as in the church of Smyrna) relied on work related to trade and transportation for daily survival. They saw no problem with this activity. It was necessary for their survival. The writer of Revelation disagrees. The angel and heavenly voice reveal the exploitative nature of Rome's economic activity and announce God's judgment on it. It is not just a matter of trade and artisan groups paying homage to the emperor in their gatherings. Their very participation in the imperial economy compromises them. The writer thinks their negotiation of Rome's economy is too accommodated. He presents them with a stark challenge in chapter 18 that requires them to change their way of living regardless of the cost. The chapter calls them to turn away from their cultural accommodation. It requires them to disengage from benefiting from the empire. They are to distance themselves from its activity. He does not, however, spell out his alternative. His emphasis falls on consequences. If they don't heed his call to "come out," they too will be caught up in the judgment. This is a costly challenge for some in the churches. For the writer of Revelation it is a lifeand-death matter. We do not know if members of the seven churches complied with the writer's command. FoodThe New Testament writings engaged another everyday matter that involved negotiation of the empire, namely food. One scholar has shown that every chapter of Luke's Gospel contains references to food. The three synoptic Gospels (Mark, Matthew, Luke) feature Jesus' meals, including his last supper. Two of the seven "signs" that Jesus performs in John's Gospel involve providing wine (John 2:1-12) and feeding a crowd with bread and fish (6:1-14). Paul and Peter have a major confrontation and falling out over eating companions (Gal. 2:11-14). Paul rebukes the believers in Corinth for their divisive and humiliating meals (1 Cor. 11:17-34). James 3:15 and 1 John 3:16 urge providing for the hungry and needy. Rome's all-consuming trade that siphons off products from the provinces includes food (Rev. 18:13-14). It is important to understand this concern with food within the context of the Roman imperial system. Food was about power. 109 Its production (based in land), distribution, and consumption reflected elite control. Accordingly, the wealthy and powerful enjoyed an abundant and diverse food supply. Quality and plentiful food was a marker of status and wealth, another indicator (like clothing, housing, transport, nonmanual labor, education, and so forth) that divided elites from nonelites. It established the former as privileged and powerful and the latter as inferior and of low entitlement. The latter struggled to acquire enough food as well as food of adequate nutritional value. For most, this was a constant struggle. And it was cyclic whereby most dropped below subsistence levels at times throughout each year. Food, then, displayed the injustice of the empire on a daily basis. The irony of this situation was that Roman propaganda claimed that one of the gifts of the Roman Empire to its inhabitants was fertility and abundance! It is difficult for many of us in an age of well-stocked supermarkets, numerous restaurants, pervasive fast-food outlets, cookbooks, refrigerators, frozen and packaged food, obesity, faddish diets, and underreported starvation to understand problems with the food supply. The first-century world, though, was quite different. Famines were relatively rare because both elites and nonelites had strategies to prevent them. Elites provided handouts and controlled distribution; peasants diversified crops and stored any surplus. But whereas famines were rare, food shortages of varying intensities were frequent. They resulted from factors such as: nature: unfavorable weather, poor yields, crop disease, seasonal variations; agricultural practices: overcropping of land; the market: high prices, limited supply; political events: war, taxes, tribute, the priority of supplying Rome before other areas; distribution: attacks by pirates, poor storage, speculation by traders, self-interested elite control of storage; location: cities with surrounding areas unable to sustain a large urban population, distance from suppliers, distance from ports. Our sources, mostly from elite authors, pay little attention to everyday struggles to procure food. They do, though, note times of special strugglesome local, some more regionalthat were prob110 ably serious enough to threaten elites. The actual suffering for nonelites was of course much greater. The decades of the 40s and 50s CEthe time of Paul's missionseem especially difficult. Early in the 40s the emperor Claudius authorized significant expansion of Rome's port Ostia after a shortage of grain. Hooding of the Nile in Egypt in the mid-40s caused damage to grain crops that seriously disrupted supply to Rome. Syria, Judea, and Jerusalem experienced severe food shortages around 4 6 - 4 8 , probably the "worldwide" famine prophesied by Agabus in Acts 11:27-30. The second half of the 40s and early 50s also saw flooding and crop loss in Greece. Evidence from Corinth attests that during the 40s and 50s an elite figure named Tiberius Claudius Dinippus was appointed three times as curator annonae. This costly office involved managing the limited grain supply in a time of crisis. The appointed person functioned as a benefactor. He used his own resources (and his influence to persuade other elite figures to do likewise) to purchase expensive grain supplies for the city. Dinippus's threefold appointment and honoring in inscriptions attest repeated shortages, Dinippus's considerable wealth, and a task satisfactorily undertaken. Some interpreters have seen Paul's reference to "the impending crisis" in 1 Corinthians 7:26 as a possible reference to a food shortage. When elite authors do refer to food shortages, their central concern is usually not human suffering. Their primary concern is often the civic disturbances that inevitably accompany food shortages. Frequently the object of the urban crowds' desperation was elite officials, landowners, and their property. They were typically suspected, and not without some reason, of hoarding supplies, depriving nonelites of food, and driving up prices. Elites realized that such civic disorder, likely plunder, and personal injury threatened the hierarchical status quo over which they presided and from w h i c h t h e y b e n e f i t e d . P r a c t i c a l r e s p o n s e s v a r i e d . Occasionally price ceilings were fixed. Elites sponsored handouts. Cities appointed an official to oversee the crisis. Rarely, though, did cities establish emergency supplies. MatthewNew Testament texts reflect and negotiate these realities in various ways. Probably addressed to followers in Antioch in Syria, 111 Matthew's Gospel, for example, mentions various aspects of the production of food: large landed estates, vineyards, slaves (20:116; 21:33-43), fishing (4:18-22), and manual labor (11:28-30). It names basic food items (bread, fish, wine, grain). It recognizes that food divides the powerful and the powerless. The ruler Herod Antipas, Rome's ally, enjoys a sumptuous birthday party (14:1-12) while others worry about what they will eat and drink (6:25-34). Some beg (20:29-34). There are food shortages (24:7). Jesus attacks the Rome-allied, Jerusalem-based leadership for its control of the food supply. In describing the "harassed and helpless" nonelite as "sheep without a shepherd," he employs an image commonly used for rulers (9:36). The image especially recalls God's condemnation of the leaders in Ezekiel 34 for ruling the people with "force and harshness" (Ezek. 34:4). The rulers eat plentifully (34:2-3, 8) while they devour the sheep through harmful policies and practices (34:10). Ezekiel envisages their replacement with an agent of God's rule that will supply abundant food (34:13-31). Matthew applies this critique to Rome's imperial system. In Matthew 12:1-8 Jesus opposes the ruling group's attempt to forbid gathering food on the Sabbath. In 15:5-6 he rejects their efforts of encouraging gifts to the temple that deprive the vulnerable elderly of resources needed for food. In 23:23-24 their focus on tithing "mint, dill, and cumin" looks ridiculous compared to their neglect of weightier matters such as justice, faithfulness, and mercy. Attention to these last three matters would see a radical reform of the empire and of the food supply. But of course elite guardians of the empire are not interested in reform. For his challenges, Jesus dies. How are followers of Jesus to negotiate this world? Instead of imitating urban culture and appealing to wealthy civic benefactors, the Gospel places responsibility with each disciple (compare the similar response to the famine in Acts 11:29, and Paul's collection in 1 Cor. 16:1-4). Disciples are to share whatever food they have with those who are hungry. One of the traditional "acts of mercy" required of disciples (almsgiving, 6:2-4) comprises providing food (Prov. 25:21; Tob. 1:16-17). Food is shared not to enhance o n e ' s h o n o r and reputation, but for the good of the other. Disciples are also to fast (6:16-18). Fasting usually means forgoing 112 food. However, the prophet Isaiah describes "true fasting" as countering injustice, freeing the oppressed, feeding the hungry, and providing hospitality to the homeless and naked (Isa. 58:6-10). Supplying food to the hungry, the "least of these," is a criterion for judgment (25:35, 37, 42, 44). Such work, required of all disciples regardless of levels of resources, constitutes those who are blessed as hungering and thirsting for justice (5:6). It contributes to alternative social and economic interactions. The instruction to pray, "Give us this day our daily bread," sustains such work and frames it in the context of prayer for the coming of God's empire and the d o i n g of G o d ' s will " o n e a r t h as it is in h e a v e n " ( 6 : 9 - 1 3 ) . Supplying food is God's will. In addition to these strategies, the Gospel includes two scenes in which Jesus enacts God's purposes for fertility and abundant food. Twice Jesus feeds large crowds (14:13-21; 15:32-39). He takes a limited human supply (a desert; large numbers; few resources) and produces so much food that all are fed and there are leftovers. These scenes enact visions from prophetic traditions that depict God's future reign and the completion of God's purposes as a feast of abundant, good-quality food. Isaiah envisions a feast on Mount Zion (the mountain in Matt. 15:29) "of rich food, a feast of well-aged wines, of rich food filled with marrow" (Isa. 25:6-10; Ezek. 34:25-31) "for all peoples." Apocalyptic writers contemporary with Matthew such as 4 Ezra 7 - 8 , 2 Baruch 7 2 - 7 3 , and Apocalypse of Abraham 21 similarly envisage the completion of God's purposes in establishing a world of abundant fertility. Such visions of God's future work, like Matthew's, show Rome's propaganda claims that it had already created a world of abundance and plenty to be false and presumptuous. God's justice-bringing work will reverse the inadequate food supply that marks Rome's empire. The establishment of God's reign in a new heaven and earth (Matt. 19:28; 24:35) will also restore access to land, necessary for food supply. Recalling the situation of Psalm 37, Jesus promises the suffering poor ("the m e e k " ) that God will overcome the oppressive powerful and rich. God will reverse the current injustice and the poor will inherit the land (Matt. 5:5 evoking Ps. 37). God's final victory will establish God's justice and replace malnutrition and i n a d e q u a t e food with a b u n d a n t food. Disciples 113 anticipate this future "meal of all meals" in the meantime by eating together in honor of Jesus "until that day" (26:26-29; cf. 8:11-12). sionby accepting the loving actions of a "sinful" woman at a meal in a Pharisee's house. He extends God's inclusive forgiveness to her (7:36-50). Chapter 14 builds a collage of meal scenes to depict God's transforming empire or reign manifested in Jesus' ministry. In 14:1-6, at a meal at a Pharisee's house on a Sabbath, Jesus extends God's merciful power to a sick man and heals him. In 14:7-11, Jesus attacks the elite's social honor code by criticizing the practice of seating people according to their social status. His attack rejects a foundation of imperial society, namely social stratification and hierarchy. In 14:12-14, he attacks the practice of reciprocity. This practice formed part of the patronage system, which understood "gifts" or favors or invitations to obligate people to reciprocate in equal and appropriate ways. The practice reinforced divisions between elites and nonelites because nonelites lacked resources to reciprocate equally. Instead they were obligated to provide services and goods. Jesus proposes a different pattern of social interaction, one that rejects insider-outsider/privileged-powerless boundaries. Instead it values generosity and inclusion, especially of the socially despised (14:13). In this way, social interaction embodies and imitates God's purposes. In 14:15-24, Jesus reinforces the point by telling the parable of the "great dinner." The host includes nonelites and the destitute who cannot reciprocate and whose presence constitutes a different social interaction. In these meal scenes, Jesus counters basic social patterns of imperial society that were encapsulated in meal etiquette: valuing social status, stratification, hierarchy; reciprocity; elite/nonelite boundaries; and exclusion. He verbalizes and enacts alternative societal patterns of inclusion and disregard for social status. He attests the inclusive hospitality that Luke's audience is to practice in continuing his mission. as cereals, olives, wine, and legumes supply energy, protein, vitamins B and E, calcium, and iron. However, numerous factors such as limited quantities of food, inferior quality, and uneven supplies reduced its actual healthfulness, resulting in widespread malnutrition. Malnutrition was evident in diseases of deficiency and of infection. Deficiency diseases included painful bladder stones from lack of animal products, eye diseases from vitamin A deficiency and diets low in animal-derived products and green vegetables, and rickets or limb deformity from vitamin D deficiency. Malnutrition also renders people more vulnerable to infectious diseases such as malaria, diarrhea, and dysentery. High population densities in cities; poverty; inadequate sewage and garbage disposal; limited sanitation; inadequate water supply distribution and unhygienic storage in cisterns; transmission of diseases in public baths; the presence of animals and feces; flies, mosquitoes, and other insects; and ineffective medical intervention ensured widespread infection. Swollen eyes, skin rashes, and lost limbs were common, as were cholera, typhus, and the plague bacillus. Meningitis, measles, mumps, scarlet fever, and smallpox affected many, causing deafness and blindness. Not surprisingly, mortality rates were high and age spans short. Up to 50 percent of children died by age ten. Child-raising practices such as denying protein-rich, infection-fighting colostrum to newborns and early weaning onto nutritionally inadequate foods contributed to high infant mortality rates. Swaddling contributed to limb deformation. Estimates of age spans suggest that while elites could live into their sixties or seventies, the life span of nonelites was much shorter, often around thirty. The Gospels depict the consequences of a world in which the food supply is precarious and its nutritional quality poor. The sick and physically damaged pervade the Gospels. John's Gospel features three stories in which Jesus brings healing to the official's son (John 4:46-54, fever), the crippled man (5:1-18), and the blind man (chap. 9). These healing stories attest to lengthy suffering and the poor quality of people's lives. The crippled man, for example, has been ill for thirty-eight years, has no caring support, and exists with many others who are blind, lame, and paralyzed (5:3-5). The blind man is a beggar (9:8) and seems distanced from his parents (9:19-23). Healing means not only restored health but a new life and social experience. 116 Matthew includes numerous summary passages referring to many diseases and healings (Matt. 4:23-25; 9:35; 11:4-5; 12:15-17; 14:34-36; 15:29-31; 21:14). He also personalizes the suffering and t r a n s f o r m a t i o n , w i t h i n d i v i d u a l h e a l i n g s c e n e s , e i t h e r in sequences (chaps. 8 - 9 ; 12:9-14, 22), or alone (15:21-28; 17:14-20; 20:29-34). Among the specified diseases are contagious leprosy (8:1-4; 11:5), as well as blindness (9:27-31; 11:5; 12:22; 15:30-31; 20:30; 21:14), pains (4:24), and various deformities and paralysis (4:24; 8:6; 9:2; 11:5; 12:9-14; 15:30; 21:14). Also to be noted are studies that link imperial and oppressive contexts with psychosomatic illness and demonic possession. Scholars have observed the prevalence of physical symptoms of pains, menstrual disorders, muteness, muscular rigidity, and paralysis in contexts of exploitation and trauma. In Matthew's Gospel a centurion has a paralyzed slave (8:6; son?), and there are other paralyzed folk (4:24; 9:2, 6), along with the hemorrhaging woman (9:20-22), the shriveled up (12:10), and the m u t e / d e a f (9:32-33; 11:5; 12:22; 15:30-31). Matthew also refers to numerous demoniacs (4:24; 8:16,28,33; 9:32; 12:22; 15:22). Jesus' healings and exorcisms are direct confrontations with the effects of Roman rule. In his exorcisms he engages the demonic power "behind the throne" (4:8) and overcomes it (8:26-34). In giving n e w life to demoniacs and the sick, Jesus rolls back the destructive impact of the empire. He asserts God's life-giving purposes (Matt. 11:2-6; 12:15-21), and manifests God's empire or reign (12:28). In so doing he anticipates the final establishment of God's purposes that will be marked not only by abundance and fertility (see page 113), but also by physical wholeness. Healing accompanies the feedings (14:14; 15:29-31). The prophet Isaiah envisions the establishment of God's reign as a time that reverses the physical damage to persons caused by empires. "The eyes of the blind shall be opened, and the ears of the deaf unstopped; then the lame shall leap like a deer, and the tongue of the speechless sing for joy" (Isa. 35:5-6). Matthew quotes this passage in 11:2-6 to interpret Jesus' healings as signifying the presence of God's empire in the midst of Rome's and anticipating its future establishment. Followers of Jesus continue this healing work that signifies the presence of God's empire and anticipates its future establishment. Paul reminds the Corinthians that he worked "signs and wonders 117 and mighty works" among them (2 Cor. 12:12). Acts narrates healings performed by Peter and John (Acts 3:1-10), the apostles (5:12), Philip (8:6-8), Paul and Barnabas (14:3), and Paul (19:11-12). Matthew's Jesus commands disciples, "Cure the sick, raise the dead, cleanse the lepers, cast out demons" (Matt. 10:8). James exhorts the sick to call the elders for anointing with oil and prayer (James 5:14-15). Revelation places this healing activity in the context of the completion of God's purposes. In the new Jerusalem that replaces condemned Babylon/Rome, there will be no more death, mourning, crying, and pain (Rev. 21:4). God will completely reverse the sickening impact of Rome's empire, replacing sickness and disease with new life, abundance, and wholeness. ConclusionWealth or its lack, food or its lack, and health or its lack comprise three everyday expressions of the Roman imperial system. In this chapter I have observed a number of ways in which some New Testament writers negotiated these everyday realities. 118 CHAPTER 8 he hierarchical social interactions and exploitative structures of the Roman Empire fostered social resentment, anger, and hostility. There were no democratic processes of reform. Instead, New Testament writers offer, as we have seen, various ways of negotiating R o m e ' s empire. We have analyzed this diverse negotiation as it involves the empire's hierarchical structure (chapter 1); different evaluations of the empire (chapter 2); ruling faces of the empire (chapter 3); places of the empire, including city, countryside, and temples (chapters 4-5); imperial theology (chapter 6); economics, food, and sickness and healing (chapter 7). This c o n c l u d i n g chapter looks further at s o m e d y n a m i c s involved in resisting Rome's rule. We have observed that often accommodation and resistance coexist. But resistance takes different forms. It can be violent and nonviolent, hidden and open, directly confrontational or more concerned with the distinctive practices and theology of an alternative community. In this chapter I will discuss three expressions of resistance: imagining Rome's 119 violent overthrow, employing disguised and ambiguous protest, and using flattery. Punitive RhetoricOne scenario directs punitive rhetoric against elites, promising their destruction and the reversal of the social order. Paul repeate d l y d e c l a r e s the end of R o m e ' s r u l i n g a u t h o r i t i e s . In 1 Corinthians 15:24 he proclaims that Christ will "destroy every ruler and every authority and power" when God's empire is established. Paul's statements are "in-house" and hidden in that they are 120 directed to believers and are not for a public audience. The Gospels also present Jesus speaking publicly to and about elites. Jesus utters words of indignation against them, negating and countering their values and self-benefiting societal structures. Luke's Jesus, for example, denounces in a series of woes or prophetic judgments those who are rich and full, promising that God will reverse the status quo (Luke 6:24-25). Subsequently, he condemns the Jerusalem rulers with a series of woes in 11:37-54 directed against their "greed and wickedness" (11:39). They will not enter G o d ' s reign (13:28-29), and their center of power, Jerusalem, will be destroyed (19:41-44). Mark's Jesus identifies Rome and the military power of its legions as demonic (Mark 5:1-20). He anticipates God's powerful destruction of Rome by casting the demons called Legion into the sea (5:9-13). In Mark 7:1-13, he condemns the temple-based Jerusalem leaders for "abandoning] the commandment of God" (7:8) and "making void the word of God" (7:13). He tells peasants in Galilean villages not to support the temple because offerings to God that deprived the vulnerable elderly of support violated the command to honor parents. Jesus' condemnation of the leaders means their inevitable punishment. In 8:15 he warns people to beware of the leaders as "yeast," a reference to their corrupting evil that results from their rejection of God's purposes. In all the Gospels, Jesus' attack on the temple (see the discussion in chapter 5, above) is an open and public confrontation with the leaders. He punctures the societal order by enacting its judgment for failing to enact God's purposes. For this open and direct challenge, he dies. John the Baptist also directly confronts and verbally condemns the Jerusalem leadership, telling them that "even now the ax is lying at the root of the trees" (Matt. 3:7-10). Prophets used the ax image to denote G o d ' s j u d g m e n t on and destruction of the Assyrian (Isa. 10:33-34; Ezek. 31) and Babylonian Empires (Dan. 4:9-27). But the ax was also a symbol of Roman authority. It was part of the fasces, a bundle of rods and ax paraded by Roman rulers as an intimidating symbol of Rome's power to ensure submission by beating and decapitating. John the Baptist turns the image back on Rome's provincial allies as a symbol of God's order that will destroy Rome and its allies. He adds a further image of 121 judgment by declaring that not only will the trees be cut down, they will be burned. Rome had burned Jerusalem in 70 CE by destroying the city. John's promise is that God will destroy Rome. Herod, Rome's puppet, ensures John diesby beheading him (Matt. 14:8-12). Imagined DestructionOther scenarios imagine the violent judgment that follows from J e s u s ' return, and precedes the transformation of the world through the full establishment of God's empire. Those who are "ashamed" of Jesus and his words now will be condemned at his return (Mark 8:38). Those who deny Jesus now will be denied (condemned) by him then (Luke 12:8). Matthew envisions Rome's overthrow in 24:27-31 at the return of Jesus. The first part of Matthew 24 emphasizes that discipleship in the time before Jesus' return is marked by increasing turmoil and requires faithful endurance and mission (24:12-13). But when it happens, Jesus' "coming" will be spectacular like flashes of lightning (24:27). Lightning often denotes God's power and presence (Exod. 19:16). But it is also associated with Jupiter and signals either the favor or disfavor of the gods for Rome expressed in earthly events such as battles and accession to power. This ambivalent reference to lightning indicates that J e s u s ' return means a clash of powers. God's sovereignty collides with Rome's. Jesus' return is called a "coming" (Matt. 24:3, 27). The Greek noun parousia similarly indicates a collision of powers. It denotes God's powerful presence as well as the approach of a Roman emperor, general, or governor to a city where he is received with honor and deference. Jesus is named in 24:27 as the "Son of Man." This description evokes the figure in Daniel 7 who acts on God's behalf, destroys all human empires, and establishes God's "everlasting dominion" (Dan. 7:14). Verse 28 describes the destruction from Jesus' coming as Son of Man. "Wherever the corpse is, there the eagles will be gathered" (literal translation). Eagles (not the mistaken translation, "vultures") represent imperial powers subject to God's purposes (Deut. 28:49). The eagle was of course the symbol of the Roman Empire. Soldiers carried eagle images into battle. In verse 28, the eagles, symbols of Roman military power, are gathered with 122 corpses. Both the eagle images arid soldiers are destroyed in the final battle against Jesus' forces. Other Jewish texts from around Matthew's time depict a final battle between God's forces and Rome, with Rome being destroyed (4 Ezra 11-13; 2 Baruch 39-40; Qumran's War Scroll). God's victory is accompanied in verse 29 by cosmic signs. The sun darkens, as does the moon, and stars fall from heaven. Some understood these "powers of heaven" as solar, lunar, and astral deities that blessed and guided Rome. God's victory at Jesus' coming extinguishes these cosmic deities in judgment and reasserts God's sovereignty over this part of God's creation (Gen. 1). Jesus' coming is "lights-out" time for Rome. A sign or military emblem appears in the heavens announcing God's victory and the establishment of God's empire or reign (24:30). Jesus sends out angels, who with a trumpet calla conventional call to battlegather G o d ' s people. G o d ' s empire destroys Rome and establishes God's purposes. There is an interesting paradox in this scene of imagined destruction. It presents Rome's downfall as very public and cosmic. But the vision is covert and hidden. It is not public knowledge or made known to Rome. Only those who are committed to Jesus and hear the "in-house" gospel know it. The vision functions not to warn Rome, but to assure Jesus' followers of justice and transformation at God's violent and cosmic intervention. It sustains them in living out faithfully their alternative community and understanding of life. It is not surprising that the New Testament writings envision Rome's violent punishment, reversal of societal order, and powerful overthrow. These writings are influenced by the cultural circumstances of military violence and subjugation in which they originate. The values and practices of those who dominate shape the oppressed. Oppressed peoples absorb the cultural ethos, which constantly models violent power as the means to a very desirable end. Accordingly, they want to be in charge. They want what they hate. As we saw in chapters 1 and 2, numerous studies have shown that in their "hidden transcripts" oppressed people often imagine violent revenge on their oppressors and a reversal of roles. They become like that which they oppose. They envision themselves exercising power and enjoying wealth and status. That 123 is, their fantasies of reversal and revenge often imitate their oppressors both in meansthe use of violent and overwhelming powerand in outcomethe gaining of wealth, power, and status. This tension reflects their hybrid existence, caught up in the intersection between the culture of the oppressed and their own culture as the oppressed. seals, described in 6:1-8, release conquest, war, economic exploitation, famine, and pestilence. In one sense these disasters depict God's violent punishment. Yet the violent destruction does not come about because God intervenes. Conquest, war, economic exploitation, and famine are expressions and consequences of empire. Military power was foundational for Rome's empire. Economic exploitation was the elite's mode of life. Famine and disease were inevitable consequences (see chapter 7, above). These are everyday imperial realities well known to the empire's inhabitants. They are presented as God's judgment. God's judgment, already taking place, is less about angry thunderbolts than it is about a permissive stance toward the world. God allows Rome to experience the consequences of its own rule. It seems justice, rather than revenge, is operative. Yet revenge is not far away because God's permissive justice means injustice for many. There is tragic fallout from God's nonaction or permissive stance. Not only is there general suffering, the fifth seal reveals martyrs killed by empire (6:9-11). They cry out to God, "How long will it be before you judge and avenge our blood?" The sixth seal reveals a cosmic catastrophe as the world falls apart (6:12-17). This is the "wrath of the L a m b " at work (6:16-17). A second factor qualifies the desire for revenge. In chapters 8 and 9, seven trumpets blow in sequence. As each of the first four trumpets blows, terrible disasters happen on earth and throughout the cosmos (8:6-12). These disasters echo the ten plagues that afflicted Pharaoh before he let the enslaved Israelites depart from Egypt (Exod. 7-12). A third of the earth is destroyed, along with a mountain and a third of the rivers, and a third of the sun, moon, and stars. The fifth trumpet releases a hoard of locusts or scorpionlike creatures from the underworld that attack people opposed to God (9:1-11). The sixth trumpet produces an attack that kills a third of the population. As with the first four seals, this attack is a consequence of empire. Empires always elicit challengers. The attackers resemble Rome's archrival, the Parthian Empire (9:13-19). What is significant about this sequence of trumpet-released disasters is its limited extent and purpose. The disasters destroy not everything, but a third of the targets. Mercy tempers the destruction. 125 Verses 20 and 21 make very clear that this violence is to be understood not as revenge or judgment but as a merciful warning. It is intended to bring about repentance. So the first trumpet destroys not the whole earth, but only (!) a third (8:7). The locusts/scorpions are allowed to torture but not kill (9:5). They can target only those opposed to God's purposes (9:4). The desire for violent puni s h m e n t is m o d i f i e d b y a merciful c h a n c e for r e p e n t a n c e . However, verses 20 and 21 recognize that the attempt fails. There is a third qualification to the desire for violent revenge that involves God's use of life-giving power. The central figure in God's purposes is "a Lamb standing as if it had been slaughtered" (5:6). In the previous verse, one of the elders invites the seer, "See, the Lion of the tribe of Judah, the Root of David" (5:5). But in an amazing juxtaposition, what he is invited to see differs greatly from what he actually sees. In 5:6, the powerful conquering Lion, king of the beasts, turns out to be a "Lamb standing." Rather than causing suffering by exerting great power, the Lamb appears to have suffered at the hands of power. Instead of slaughtering, it has been slaughtered, a verb that commonly represents imperial violence in Revelation (6:4, 9). The agent of God's purposes has been a victim of imperial violence (crucifixion) but he has not been destroyed. He stands, a reference to Jesus' resurrection, and is in the heavens (his ascension or vindication by God). God has outpowered and triumphed over Rome, but has done so not with an act of violence but with a powerful act of giving life to one who was broken and killed. God's way of working is an alternative to Rome's methods. But, fourth, if the Lamb manifests God's life-giving purpose, what about the battle scene in 19:17-21? Again the violent fantasy is qualified, here by the choice of weapons. The Lamb, or in chapter 19 the rider on the white horse (19:11-21), fights not with literal weapons but with a sword that comes from his mouth (19:15). He wears a robe stained not with the blood of others but with his own blood given for others (19:13). He is identified as the "Word of God" (19:13) who reveals or communicates God's purposes. He captures the beast and his prophet (19:20) and kills the rest with his sword, but it is "the sword that came from his mouth" (19:21). God does not imitate imperial military violence to achieve victory but accomplishes it through revealing, persuading, and judging words. 126 A fifth qualification of the fantasy of violent revenge emerges in another cycle of judgments in chapters 15 and 16. After sequences of opened seals (chap. 6, consequences of empire) and blown trumpets (chaps. 8 - 9 , warnings to urge repentance), this third sequence in chapters 15 and 16 expresses God's judgment through seven bowls or plagues (again echoing the Exodus plagues). These bowls or plagues express the "wrath of God" (16:1, 19) against those who "did not repent of their [evil] deeds" (16:9, 11), a reference back to the trumpets of chapters 8 and 9 and 9:20-21 in particular. There is a final battle (16:12-16) and great destruction only after people have been given the chance to repent. An angel declares God's "judgments are true and just," and celebrates the revenge: "It is what they deserve!" (16:5-7). Yet, sixth, juxtaposed to this violence and destruction is another thread, one of salvation, transformation, and inclusion. The two chapters begin with a hymn of praise (15:3-4). The hymn does not gloat over enemies destroyed. It does not mock the vanquished. Rather, it praises God for being "just and true" and "king of the nations." And instead of celebrating judgment and destruction, it declares, "All nations will come and worship before you" (15:4). The overarching agenda seems to be salvation, not vengeful destruction. A similar emphasis occurs at the end of the book in 21:24-26. Even though the nations are supposedly destroyed (19:15), they come to live by the light (saving presence) of God that shines from the new Jerusalem. The "kings of the earth" are destroyed in 19:19-21 as God's opponents, yet they are drawn to this light and to the city's ever open gates (21:24-25). Revelation's vision ends with healing for the nations (22:2). A vision of transformation and inclusion for all people affected through God's life-giving power completes the book. We should also note as a seventh qualification that Revelation insists that humans do not use violence to attack the empire. Followers of the "Lamb standing as if it had been slaughtered" (5:6), who live in the seven churches of the province of Asia where economic, civic, and religious participation in the empire was woven into the fabric of everyday life, are to employ the same means of resistance as the Lamb. They negotiate Rome's world by bearing "faithful witness" (1:5), refusing to compromise, and by 127 coming "out from her" (18:4). Their faithfulness means social and economic hardship, even suffering martyrdom, as the consequence of their nonviolent faithfulness, if necessary. These faithful witnesses comprise the army of God and the Lamb (chap. 7). They gain victory not by violence, not by causing suffering to others, but by faithfulness (7:14-17). Their martyrdom results from active but nonviolent resistance that refuses to be intimidated by the empire's violence and denies it the power to determine their loyalties. Such imaginings of violent and cosmic overthrow, reversal, and punishment of Rome sustained, and were sustained among, powerless groups of followers of Jesus. Committed to God's purposes manifested in Jesus, these groups envision the day when their alternative practices and social interactions replace R o m e ' s oppressive system through God's intervention. These imaginings are the in-group protests of an alternative community, directed against Rome but not made public or expressed openly to Rome. They speak "truth about power" rather than "truth to power." They do, though, undergird practices of direct and open (nonviolent) confrontation with imperial officials and with neighbors and fellow guild or artisan group members, such as not sacrificing to the emperor and costly withdrawal from economic and social participation in the empire. this desire by allowing limited expression. Jesus acts within what is sanctioned. Everything seems to be under control. Jesus enters Jerusalem from the Mount of Olives. This might appear to be an ordinary everyday place, but for those in the know it has great significance. It is the place from which God will exercise the final judgment and salvation, according to Zechariah 14. Jesus evokes dangerous traditions that the Jerusalem rulers do not emphasize and the Roman rulers probably do not know. He appeals to an alternative or "little" tradition over against more acceptable traditions that sanction the status quo (often called "Great Traditions"). He exploits the ambiguity of the ordinary and the special. He disguises his resisting message in this ambiguity. A similar thing happens with the entry itself. There was a set protocol for Roman officials, such as emperors, generals, or governors, entering a city. It involved the elite escorting the official into the city, welcoming crowds, hymns, speeches of welcome, and offering sacrifice in the city temple. This protocol acknowledged and submitted to Roman greatness in terms of its power and ability to dominate. Most of these elements appear in Jesus' entrance scene. Jesus processes. Welcoming crowds celebrate. They recite a hymn (Psalm 118). He goes into the temple, though, to announce judgment on its death-bringing impact. Absent is any welcome from the elites. Jesus is not recognized for his greatness and power to dominate. In fact, the scene is preceded by Jesus' declaration that he has come to serve not be served (20:28) and by a demonstration of his lifegiving service in the healing of two blind men (20:29-34). His entry emphasizes a way of life and practices antithetical to Rome's. Moreover, about half of the scene involves procuring the donkey. An entering general or emperor would ride a warhorse or chariot, not an everyday lowly beast of burden like a donkey. The donkey was a common symbol of Gentile derision and scorn toward Jews. Some Gentile writers claimed that Jews worshiped a donkey's head in the temple. Jesus identifies with the underdonkey. He appears to be a common poor peasant riding an everyday beast just as others probably did. Yet the donkey was ambiguous. For those in the know, the donkey had a much greater significance. It is the animal that God is to ride into Jerusalem in Zechariah 9 when God defeats those who, like Rome, resist God's purposes, and that establishes God's reign 129 in full. Mark leaves this significance of the donkey unstated and disguised, assuming his audience will know this inside information (Mark 11:7). Matthew spells it out. He inserts in Matthew 21:4-5 a quote from Zechariah 9:9, which refers to God's entry to Jerusalem, to make sure we understand the significance of Jesus' ambiguous and hidden action. Similarly, ambiguity surrounds the psalm that the crowds recite. They shout (with slightly different wording in Mark and Matthew), "Hosanna to the son of David. Blessed is the one who comes in the name of the Lord." They recite from Psalm 118 to celebrate God's saving of the people by victory over the nations. It was an especially appropriate and common psalm for reciting at Passover in celebrating freedom from Egypt. But in relation to Jesus the psalm takes on new significance for those in the know. It does not just look back to the past. It joins with the references to Zechariah 9 through 14 concerning the "Mount of Olives" and the "donkey" to anticipate God's final salvation and victory over opposing forces such as Rome. The psalm has meaning hidden from the ruling powers but known to J e s u s ' followers. What appears to be a permitted festival celebration takes on hidden, subversive, threatening significance. We can think of Jesus' entrance as a kind of street theater or acted parable. Jesus uses ambiguity to enact a message that restores dignity and offers hope to those suffering under Rome's rule. His actions are, for those in the know, extremely threatening to Rome, yet on the outside they appear to be nothing more than participation in the permitted festival activities. He exploits the festival occasion to both conceal and to reveal God's purposes, avoiding direct confrontation with the rulers while engaging in a hidden protest. When he enters the temple in the next scene, though, he takes a very different approach. In overturning tables and citing prophets like Jeremiah, he openly and directly confronts the temple establishment with its vast economic, political, religious, and societal power (see chapter 5, above). He forcefully punctures the ruling order to expose its exploitation. For this attack, he dies. the powerless have no access to political power. But they restore dignity, anticipate another way of life, encourage initiative, and protest the injustice of the present. We will briefly look at two incidents. inner garment means stripping himself naked in court. It symbolizes the stripping away of property and dignity. It exposes, among other things, the basic humanity of the poor as well as the powerful person's heartless demand. Jesus' third demand concerns R o m e ' s military power. The "force" involves Rome's right to requisition labor, transport, and lodging from subject people (see Matt. 27:32). Jesus' instruction to carry the soldier's pack an extra mile appears initially to be compliance. But like the previous two examples, it is an active strategy for refusing to be humiliated by claiming initiative and asserting one's dignity to the discomfort of the oppressor. Going the second mile is a surprising action. It refuses to deal with Rome only in its terms. It reconfigures the power arrangement. The oppressed has decided the action, not the soldier. It places the soldier off guard and out of control, wondering if the provincial is being helpful or mischievous, and wondering if he will be reported for making the provincial do two miles. Jesus' fourth example (5:42) concerns not refusing those who beg. The action focuses not on wealth but on doing justice. The four examples provide imaginative, creative, active strategies for unsettling the power arrangements, restoring dignity, and breaking the c y c l e of v i o l e n c e . T h e y a s s e r t i n i t i a t i v e , dignity, and humanity in ambiguous ways over against injustice and oppression. Paying TaxesJesus gives another instruction about paying taxes that both hides yet expresses resistance. In Matthew 17:24-27, Jesus instructs Peter to pay the half-shekel tax with a coin found in a fish's mouth. The tax under discussion was paid, prior to 70 CE, to the J e r u s a l e m temple. But after J e r u s a l e m ' s defeat in 7 0 , w h e n Matthew's Gospel was written, the emperor Vespasian co-opted it as a punitive tax on Jews. He used it, insultingly, to rebuild and maintain the temple of Jupiter Capitolinus in Rome, thereby reminding Jews not only of Rome's superior power, but also of Jupiter's superiority to the God of Israel. Paying the tax was humiliating. Jesus' conversation with Peter about paying the tax reframes its significance. Jesus reminds Peter in verses 25 and 26 of the well132 known taxing ways of kings and emperors. Everyone pays taxes except the rulers' children. Not paying the tax is not an option because it will bring reprisals (17:27a). Instead, Jesus instructs Peter to catch a fish and find there the coin to pay the tax. The key to understanding Jesus' instruction is found in the Gospel's previous scenes involving fish. Twice in chapters 14 and 15 Jesus has exerted God's sovereignty over fish, multiplying several small fish to feed large crowds. Contrary to Rome's claims that the emperor rules the sea, owns all its creatures, and tightly controls and taxes the fishing industry, the Gospel asserts that the sea and its creatures belong to God and are subject to God's sovereignty (recall Gen. 1:9-13, 20-23). God supplies the fish with the coin in its mouth. The coin expresses God's sovereignty. Disciples pay the tax. It appears to Rome that they are submissive and compliant. But for disciples the tax coin has a special significance. It testifies to God's sovereignty. The tax that is supposed to enact and acknowledge Rome's control has been reframed. Unbeknown to Rome, but known to Jesus' followers, it bears witness to God's reign. Paying the tax is an ambiguous act, an expression of hidden protest. does he exhort "fear" when he has said that those who subject themselves do not fear because rulers reward good behavior (13:3)? The cracks suggest that Paul knows that this is not the whole story. He does not engage the complexity of the issue and leaves important questions unaddressed. He does not consider, for example, governing authorities who do not carry out God's will, who oppose God's purposes, and who do not do justice "for your good." Further, this flattering exhortation to submission in 13:1-7 does not cohere well with what Paul has said previously in Romans. In 1:18-32 he described the hostile and corrupt Gentile world subject to G o d ' s w r a t h . B u t h e r e the g o v e r n i n g a u t h o r i t i e s s e e m untouched by "the present evil age" and are designated agents of, not subjects of, God's wrath. In 8:18-25, Paul's eschatological perspective emphasizes that God will end this present evil age and establish God's purposes; 13:11-12 repeat this emphasis. But it is not mentioned in 13:1-7. In 12:2 he told them not to "be conformed to this world," but now he urges subjection to its ruling authorities. In 12:2b he told the believers to "discern what is the will of God," but he does not include discernment in 13:1-7. In 12:14-21 he recognized inevitable conflict with neighbors and that hostility can meet doing good (living out God's purposes), yet in 13:3 he (naively?) claims that rulers recognize and reward good behavior. In 12:17-21 he declares that God punishes evil, yet in 13:4 he identifies the governing authorities as agents of God's punishment. Paul's own experiences indicate that flattery is not his only way of negotiating Rome's official representatives and it is frequently inappropriate. He has experienced imprisonments and beatings (2 Cor. 11:23, 25), the latter referring to a punishment inflicted by Roman officials (in contrast to synagogue officials in verse 24). Acts presents Paul as being beaten and imprisoned in Philippi (Acts 16:23-40), and nearly tortured by whipping in Acts 22:24-29. He recognizes that the ruling authorities oppose God's purposes and are under God's judgment (1 Cor. 2:6-8; 1 Thess. 5:3). He knows that believers cannot give their ultimate loyalty or nondiscerning submission to the empire because for believers there is one Lord. Paul reminded the Roman believers of their commitment in Romans 10:9 (cf. 1 Cor 8:5-6; 12:3). Paul's theological thinking, shaped by eschatological and chris134 tological perspectives, sets Roman power in the context of God's greater triumphant purposes. Paul's eschatological convictions concern his confidence that God will end this present world and age, and establish God's just and life-giving purposes in full (see chapter 6, above). Believers are defined not by belonging to Rome's empire, but to God's purposes. Their ultimate political loyalty and homeland is with God. In discussing Philippians we saw that he sharply distinguishes the believers' identity from Roman claims about citizenship and the emperor as savior (see chapter 4, above). "Owr citizenship is in heaven, and it is from there that we are expecting a Savior" (emphasis added; Phil. 3:20). Paul's christological convictions center on Christ crucified (1 Cor. 2:2). Rome used crucifixion as the torture and death penalty for low-status rebels (against Rome's control); robbers (who attacked elite property); rebellious slaves (essential elite labor and property); and others that threatened the Roman order. Rome crucified provincials but not citizens, except for treason. Crucifixion thus defended elite structures and values. It was performed publicly to deter, intimidate, and coerce. Jesus had been crucified as a kingly pretender whose message and actions threatened the Roman status quo. To proclaim "Christ crucified" as Paul did was to a n n o u n c e a p o l i t i c a l l y t h r e a t e n i n g m e s s a g e . But it also announced the limits of Roman power since Paul proclaimed Jesus as raised from the dead (Rom. 10:9; 1 Cor. 15). Rome's power could not prevent the imminent establishment of God's purposes. These factors suggest that in Romans 13:1-7 Paul is choosing to present Rome in a flattering way because of some particular circumstances the church was experiencing. Scholars have made numerous guesses about the situation that might warrant such a flattering presentation. Perhaps some believers saw themselves as agents of God's judgment against the empire. Paul warns against violent action. Some have suggested that about ten years later the fire of 64 CE may have resulted from Christians who saw themselves hastening the day of judgment. If this is correct (and we do not know that it is), they would exhibit the sort of action Paul rejects here. Clearly they would have ignored Paul's instruction. Perhaps in the midst of widespread disgruntlement in the 50s 135 with the emperor Nero's harsh taxes (Tacitus, Ann. 13.50-51), Paul warns believers against not paying taxes. A refusal might provoke reprisals against Rome's Jewish community, including J e w i s h b e l i e v e r s , already v u l n e r a b l e to a n t i - J e w i s h sentiments. Perhaps, since Paul plans to visit Rome (1:11-15; 15:22-29), he thinks it necessary to defend himself against perceptions that his gospel about the establishment of God's purposes encourages disloyal actions against Rome. Perhaps Paul fears that Jewish Christians might inappropriately support growing tensions between Judeans and Rome, causing problems for the Christian communities in Rome. All of these possibilities are guesses because Paul does not identify the situation he addresses. The guesses recognize, though, that these verses relate to a specific situation for which his flattering rhetoric is appropriate. Paul's instructions in Romans 13:1-7 are not a full description of his understanding of Rome's empire as we have seen. Although he exhorts subjection, he elsewhere recognizes the greater claim of God's purposes for believers, God's inevitable and imminent triumph, a world and rulers under judgment, the alternative practices of communities of believers, and the inevitable difficulties believers will experience in being faithful to God's purposes. These verses do not comprise a political treatise that presents a fixed ethic of submission for every situation. ConclusionIn this c h a p t e r w e h a v e e x p l o r e d s o m e of the d y n a m i c s involved in resisting Rome's rule. Often accommodation and resistance coexist. Resistance takes different forms. It can be violent and nonviolent, hidden and open, directly confrontational or more concerned with the distinctive practices and theology of an alternative community. It can imagine Rome's violent overthrow, employ disguised and a m b i g u o u s protest, and use flattery. Throughout this book we have observed some of the complex and diverse ways that the New Testament writings negotiate the Roman imperial world. 136 Postscript t is virtually impossible to engage a topic such as the one discussed in this book without thinking about events in our own world. The term empire has been widely used to describe the role and actions of the United States in extending its power throughout the world by various means. It has also been used to describe multinational corporations that extend their enormous reach across the globe. In some respects our experience of empire is very different from the first century. We have different political systems. We can participate in elections, write letters to elected officials, and campaign for a particular candidate. We know a different history that has struggled for basic human rights. Yet our world is strangely similar. We know the importance of military power to securing influence. We know a world in which a relatively small percentage of the very rich control and consume much more than their share of the wealth. We know that government lies often in the hands of the elite who make decisions often to secure their own interests. We know a world in which "spin" is very much the name of the game. How might Christianswhether they live at the center of the world's most powerful empire ever, or who know the impact, reach, and power of such empiresengage these realities? Does our discussion of New Testament texts offer any help? 137 To engage such an important issue that confronts contemporary communities of faith and citizens of the global village requires much thought, conversation, and engagement with books that specifically focus on this contemporary question. That task is far beyond the span of this book. Here I will make six brief comments as a small contribution toward much more extensive conversation and inquiry. I readily recognize that the issues are much more complex and need much more attention than my all-too-brief remarks here. strategy. They offer diverse perspectives on Rome's empire, and various strategies for engaging it. The strategies stretch from demonizing it, anticipating God's judgment on it, and opposing it with defiant (but self-protective) nonviolent actions, to praying for it, submitting to it, and imitating it. If we like things clear and simple, this diversity of strategies is very frustrating. If we want a single formula to fit all situations, we won't find it in the New Testament. This complexity, though, gives us pause. These writings from early Christians show how difficult it is to live in/with/under/against empires. My choice of four prepositions in this last sentence alone hints at the diverse negotiation that is needed to be faithful. ("God bless America"). They cannot tolerate dissent. For Christians these claims raise profound questions. Jesus' teaching points his followers to love for God as their supreme allegiance. That love is to be expressed in love for neighbor. And Christians know that Jesus himself challenged and collided with empire and was crucified by an empire. That is, empires raise questions of allegiance (To whom does the world belong?); identity (Who are we?); community (What sort of world do we want to inhabit? Who is our neighbor and how do we treat them?); and power (Who exercises it and to what end? Who benefits? Who pays?). These are big and difficult questions for which there are no simple answers. But they are questions that lead us into the heart of living out Christian claims. The New Testament writings set them on our agenda and invite us to wrestle with them also. instruction as though it were the only stance followers of Jesus are to exhibit toward the government. Come what may, so the argument goes, Christians must obey. This view encourages a willing submission, a quick trust, and an unquestioning acceptance of government policies and decisions. Often Romans 13 is understood to mean that God has ordained whatever the government does and so it is to be accepted, not resisted. One consequence of this is that maintaining the social order or cooperation with it is seen to be the most important thing. There is no denying that Romans 13 and 1 Peter 2 are part of the Christian scriptures. Whether Romans 13 offers such an allembracing and compliant approach to political matters is debatable, as I suggested in the last chapter. But one thing is not debatable. The New Testament writings do not offer only one strategy of compliance and submission to define how Christians might engage political matters. They do not endorse the current societal structure as unassailable. They do not make it sacred and untouchable as God-ordained. They do not endorse the status quo regardless of its wrongs. Some Christians have wrongly tried to assert such claims in the face of sinful realities such as slavery, or misogyny, or racism. The discussion in the previous chapters show that these early Christian writers willingly evaluated the Roman Empire and were not reluctant to declare it generally inconsistent with God's purposes. They do not urge blind submission to it. Instead, the discussion in the previous chapters shows that they frequently urged strategies of opposition and challenge, of contesting and subversion. Our New Testament writings challenge a "default position" of unswerving submission. The issue, of course, is to know when to employ which strategy. When is compliance and when is resistance appropriate? That process of discernment is difficult. It involves, I would suggest, much prayer, study, thought, and debate. embeddedness in the world of empire and a profound realization of their limited ability to influence it. Jesus uses metaphors of empire to describe G o d ' s w a y s of working. He talks of the "empire or kingdom of God." He announces the coming triumph of God in which God's ways violently and forcibly overcome Rome. Paul writes in a similar vein. He readily employs military metaphors to describe Christian existence. Sometimes empires, including God's empire, accomplish good things. Empires can be ambiguous and we cannot pretend that we are not often beneficiaries of the ways of empire. Again the issue is to know which strategy to employ. When is opposition and when is support appropriate? That process of discernment questions our default positions. invite and shape Christian communities to become places that embody God's purposes and that embody an alternative way of being human in the midst of the empire. These strategies of reconceptualization and of alternative social experiences and relationships result, in part, from the early Christians not having any access to power and no opportunity to make systemic changes. This, of course, is one major difference between our world and theirs. Just exactly what Jesus or Paul or Matthew or Mark or Luke or James would say to us is not immediately obvious. At least, our different situation raises important questions about how we use access to power, and what vision of society we promote through it. Do we promote purposes of exclusion and hate or of the inclusion of all people in God's life-giving purposes? It also brings the challenge that communities of faith carefully and faithfully discern God's purposes and embody them in their own living. What is involved in such discernment? Perhaps the topics of our eight chapters provide some guidelines for areas in which we need to do some thinking and discussing in order to develop appropriate strategies for faithful engagement. 1. The discussion of chapter 1 suggests that it would be important for us to understand the nature and structures of our contemporary empires. 2. The discussion of chapter 2 indicates our responsibility to evaluate our contemporary empires theologically in terms of God's declared purpose to bless all the families or nations of the earth (Gen. 12:1-3). How do they measure up to those purposes? 3. The discussion of chapter 3 urges us to discern the current faces of empire to identify aspects of life in which we need to negotiate the demands and claims of empire. One of the reasons for such discernment is to prevent us from taking for granted what we experience each day as though "it is just the way things are." 4. The discussion of chapter 4 turns our attention to the impact of empire on rural and urban life. How does it affect these areas? 5. The discussion of chapter 5 invites us to examine the roles of "religious" places, groups, and leaders in empires. What alliances exist with the empire, and are those alliances life giving or death bringing? 142 6. The discussion of chapter 6 presses this inquiry further to include the role of religious sanctions or theological rationales. In what ways are churches allies, agents, or willing partners with empire? How compromised are we? What theological claims are used to justify or to oppose empire? 7. The discussion of chapter 7 focuses on the ways in which economics, food supplies, and disease relate to imperial power. What is the relation between our capitalist quest for more and greater profit and imperial structures? Despite God's will for hungry people to be fed, hunger remains a significant problem in this country and the world. Access to medical care varies greatly throughout the world. 8. The discussion of chapter 8 invites us to think about appropriate ways of intervening to oppose and redress the destructive ways and impacts of empire when they do not measure up to God's purposes. Every one of these areas needs much more consideration and no doubt there are numerous other dimensions to be engaged as well. But they provide something of a framework for consideration and an agenda for ecclesial communities to pursue in forming alternative worldviews and communities that embody alternative, anti-imperial practices. 143 Bibliography Arlandson, James M. Women, Class, and Society in Early Christianity: Models from Luke-Acts. Peabody, MA: Hendrickson, 1997.Bauckham, Richard. "The Economic Critique of Rome in Revelation 18." Bibliography . "Matthew and the Gentiles: Individual Conversion a n d / o r Systemic Transformation/' JSNT 26 (2004): 259-82. De Vos, Craig Steven. Church and Community Conflicts: The Relationships of the Thessalonian, Corinthian, and Philippian Churches with TheirWider Civic Communities. SBLDS 168. Atlanta: Scholars Press,1999. Gamsey, Peter. Cities, Peasants, and Food in Classical Antiquity. Pages 22652. Cambridge: Cambridge University Press, 1988. Gamsey, Peter, and Richard Sailer. The Roman Empire: Economy, Society,and Culture. Berkeley: University of California Press, 1987. Gill, David, and Conrad Gempf, eds. The Book of Acts in Its Graeco-RomanSetting. Grand Rapids: Eerdmans, 1993. Herzog II, William R. Jesus, Justice, and the Reign of God. Louisville:Westminster John Knox Press, 2000. Horsley, Richard A. Jesus and Empire: The Kingdom of God and the NewWorld Disorder. Minneapolis: Fortress Press, 2003. Harrisburg, PA: Trinity Press International, 1997. , ed. Paul and Empire: Religion and Power in Roman Imperial Society. , ed. Paul and Politics: Ekklesia, Israel, Imperium, Interpretation.Harrisburg, PA: Trinity Press International, 2000. , ed. Paul and the Roman Imperial Order. Harrisburg, PA: TrinityPress International, 2004. Horsley, Richard A., and John S. Hanson. Bandits, Prophets & Messiahs: Howard-Brook, Wes, and Anthony Gwyther. Unveiling Empire: Reading Revelation Then and Now. Maryknoll, NY: Orbis, 2001.145 Huskinson, Janet, ed. Experiencing Rome: Culture, Identity and Power in theRoman Empire. London: Routledge Press, 2000. Myers, Ched. Binding the Strong Man: A Political Reading of Mark's Story ofJesus. Maryknoll, NY: Orbis, 1988. Neyrey, Jerome H., ed. The Social World of Luke-Acts. Peabody, MA:Hendrickson, 1991. Oakes, Peter, ed. Rome in the Bible and the Early Church. Grand Rapids:Baker, 2002. Price, S. R. F. Rituals and Power: The Roman Imperial Cult in Asia Minor.Cambridge: Cambridge University Press, 1984. Rohrbaugh, Richard L., ed. The Social Sciences and New TestamentInterpretation. Peabody, MA: Hendrickson, 1996. Scott, James. Domination and the Arts of Resistance. New Haven: YaleUniversity Press, 1990. Stegemann, Wolfgang et al., eds. The Social Setting of Jesus and the Gospels.Minneapolis: Fortress Press, 2002. New York: Crossroads, 1992. Tamez, Elsa. The Scandalous Message of James: Faith Without Works Is DeadVan Tilborg, Sjef. Reading John in Ephesus. Leiden: E. J. Brill, 1996. Wengst, Klaus. Pax Romana and the Peace of Jesus Christ. Philadelphia:Fortress Press, 1987. Whittaker, C. R. "The Poor." In The Romans, edited by Andrea Giardina, 272-99. Chicago: University of Chicago Press, 1993. Zanker, Paul. The Power of Images in the Age of Augustus. Ann Arbor:University of Michigan Press, 1988. 146 Augustus. "Res Gestae," or "Acts of Augustus." In W. Eck, Augustus. Oxford: Blackwell Publishing, 2003. Translated by Deborah Lucas Schneider; "Res Gestae/Acts of Augustus." Translated by S. A. Takacs. Cicero. "De provinciis consularibus." In Cicero. 28 volumes. Vol. 13. T r a n s l a t e d by R o b e r t G a r d n e r . L o e b C l a s s i c a l L i b r a r y . Cambridge, MA: Harvard University Press, 1958. Josephus. "Vita," "The Jewish W a r , " and "Jewish Antiquities." In Josephus. 13 volumes. Translated by H. St. J. Thackeray, Ralph Marcus, and Louis H. Feld. Loeb Classical Library. Cambridge, MA: Harvard University Press, 1926-63. Bibliography of Classical Works Cited Tertullian. "De Idololatria." In J. H. Waszink and J. C. M. van Winden, De Idololatria. Leiden and New York: E. J. Brill, 1987. Virgil. "Aeneid." In Virgil. 2 volumes. Vol. 1. Translated by H. Rushton Fairclough; revised by G. P. Goold. Loeb Classical Library. Cambridge, MA: Harvard University Press, 1916-18; revised 1999..
https://ru.scribd.com/document/219460187/Warren-Carter-the-Roman-Eempire-and-the-New-Testament
CC-MAIN-2019-47
refinedweb
49,982
62.58
I’ve been thinking some more about deployment of Python web applications, and deployment in general (in part leading up to the Web Summit). And I’ve got an idea. I wrote about this about a year ago and recently revised some notes on a proposal but I’ve been thinking about something a bit more basic: a way to simply ship server applications, bundles of code. Web applications are just one use case for this. For now lets call this a “Python application package”. It has these features: - There is an application description: this tells the environment about the application. (This is sometimes called “configuration” but that term is very confusing and overloaded; I think “description” is much clearer.) - Given the description, you can create an execution environment to run code from the application and acquire objects from the application. So there would be a specific way to setup sys.path, and a way to indicate any libraries that are required but not bundled directly with the application. - The environment can inject information into the application. (Also this sort of thing is sometimes called “configuration”, but let’s not do that either.) This is where the environment could indicate, for instance, what database the application should connect to (host, username, etc). - There would be a way to run commands and get objects from the application. The environment would look in the application description to get the names of commands or objects, and use them in some specific manner depending on the purpose of the application. For instance, WSGI web applications would point the environment to an application object. A Tornado application might simply have a command to start itself (with the environment indicating what port to use through its injection). There’s a lot of things you can build from these pieces, and in a sophisticated application you might use a bunch of them at once. You might have some WSGI, maybe a seperate non-WSGI server to handle Web Sockets, something for a Celery queue, a way to accept incoming email, etc. In pretty much all cases I think basic application lifecycle is needed: commands to run when an application is first installed, something to verify the environment is acceptable, when you want to back up its data, when you want to uninstall it. There’s also some things that all environments should setup the same or inject into the application. E.g., $TMPDIR should point to a place where the application can keep its temporary files. Or, every application should have a directory (perhaps specified in another environmental variable) where it can write log files. Details? To get more concrete, here’s what I can imagine from a small application description; probably YAML would be a good format: platform: python, wsgi require: os: posix python: <3 rpm: m2crypto deb: python-m2crypto pip: requirements.txt python: paths: vendor/ wsgi: app: myapp.wsgiapp:application I imagine platform as kind of a series of mixins. This system doesn’t really need to be Python-specific; when creating something similar for Silver Lining I found PHP support relatively easy to add (handling languages that aren’t naturally portable, like Go, might be more of a stretch). So python is one of the features this application uses. You can imagine lots of modularization for other features, but it would be easy and unproductive to get distracted by that. The application has certain requirements of its environment, like the version of Python and the general OS type. The application might also require libraries, ideally one libraries that are not portable (M2Crypto being an example). Modern package management works pretty nicely for this stuff, so relying on system packages as a first try I believe is best (I’d offer requirements.txt as a fallback, not as the primary way to handle dependencies). I think it’s much more reliable if applications primarily rely on bundling their dependencies directly (i.e., using a vendor directory). The tool support for this is a bit spotty, but I believe this package format could clarify the problems and solutions. Here is an example of how you might set up a virtualenv environment for managing vendor libraries (you then do not need virtualenv to use those same libraries), and do so in a way where you can check the results into source control. It’s kind of complicated, but works (well, almost works - bin/ files need fixing up). It’s a start at least. Support Library On the environment side we need a good support library. pywebapp has some of the basic features, though it is quite incomplete. I imagine a library looking something like this: from apppackage import AppPackage app = AppPackage('/var/apps/app1.2012.02.11') # Maybe a little Debian support directly: subprocess.call(['apt-get', 'install'] + app.config['require']['deb']) # Or fall back of virtualenv/pip app.create_virtualenv('/var/app/venvs/app1.2012.02.11') app.install_pip_requirements() wsgi_app = app.load_object(app.config['wsgi']['app']) You can imagine building hosting services on this sort of thing, or setting up continuous integration servers (app.run_command(app.config['unit_test'])), and so forth. Local Development If designed properly, I think this format is as usable for local development as it is for deployment. It should be able to run directly from a checkout, with the “development environment” being an environment just like any other. This rules out, or at least makes less exciting, the use of zip files or tarballs as a package format. The only justification I see for using such archives is that they are easy to move around; but we live in the FUTURE and there are many ways to move directories around and we don’t need to cater to silly old fashions. If that means a script that creates a tarball, FTPs it to another computer, and there it is unzipped, then fine - this format should not specify anything about how you actually deliver the files. But let’s not worry about copying WARs.
http://www.ianbicking.org/blog/2012/02/python-application-package.html
CC-MAIN-2018-43
refinedweb
1,000
53.71
project Sir, I have required coding and design for bank management system in php mysql. I hope u can give me correct information. Please visit the link: JSP Bank Application The above link will be helpful for JSP Project JSP Project Register.html <html> <body > <form...; <%! %> <jsp:useBean <jsp:setProperty < FileUpload and Download FileUpload and Download Hello Sir/Madam, I have used the below coding for Upload and download file, but it is not stored in database and also it s not download the file with its content... just it download doc with 0 Bytes Fileupload in servlet Fileupload in servlet If we upload a file using servlet can it possible to put the uploaded file in any locationof the syatem(in D drive or in C drive)??? If any pls give the code.. Thanks in advance.... import fileupload - JDBCSF-fileupload-ajax - Development process JSF-fileupload-ajax for the above code , iam able to bind the contractname & contractNo but i am unable to bind the upload property in the bean it is giving null value MCA Project Training · MCA Project Training in Java Servlet Pages (JSP) · MCA... MCA Project Training  ... not the thorough experience, while MCA live project training let feel you the complete i need a project in java backend sql project project suggest a network security project based on core java with explanation PROJECT PROJECT i want online polling system project codding Java Project Deployment Java Project Deployment Hi I have created a website using servlet and JSP. There are three type of users on the website namely administrator, student and teacher. Now I need to install this website on all the systems JSF-fileupload-ajax - Java Server Faces Questions JSF-fileupload-ajax hi i am upload the file JSF with ajax i am using . i create 4 panel tabs in one panel tab i used .i want file to be uploaded using , but i am getting Nullpointer Exception when i try to get the file no def found - JSP-Servlet no def found i have used the code of file upload from rose india but when i run no def found for fileupload exception although i have put jar file in lib folder code i get from D:\project\Uploading Employee Profile project in project
http://www.roseindia.net/tutorialhelp/comment/86873
CC-MAIN-2014-52
refinedweb
378
55.47
I have int year, month, day, hour, min, sec How can I get the epoch time in C++? I am having difficulty figuring it out using Boost, any examples or alternative ways to do it? Fill in a struct tm with your values, then call std::mktime(). I'm assuming by "get the epoch time" you mean the number of seconds since the epoch, i.e. Unix time_t. You're making it too complicated. Have a look at the time functions in the standard libraries. This boost example should do what you ask for if I did understand your problem correctly. #include <iostream> #include <sys/time.h> int main () { unsigned long int seconds = time(NULL); std::cout << seconds << std::endl; return 0; }
http://m.dlxedu.com/m/askdetail/3/5f47a877ba210ff84081e80c746f3c29.html
CC-MAIN-2018-22
refinedweb
122
75.61
In this tutorial, we will learn about flask redirect and how to use it in our application. Why do we need to set up redirects? Before going to the implementation, let us first know what redirecting actually is! So as the name suggests, the redirect function, when called, basically redirects the Webpage to another URL. It is an essential part of web applications and also increases the efficiency of the application. - Take an example of Twitter; if you are not already logged in, then when you hit the Twitter URL (), you are redirected to the log-in page first. Here the redirect function plays its role. - Similarly, during an online transaction, once the payment is made, you are redirected to the confirmation page. - Another benefit of redirecting is that it helps in URL shortening—for example,. Here you type a short URL and are then redirected to the original long one. Now that we know why it is used let’s move onto the Hands-on section. Implementing a Flask Redirect Now we will code a little application using the Flask redirect function. But first, we will see the redirect function syntax. 1. Syntax of Flask redirect attribute The syntax for redirect: redirect(location, code, response = None) where: - location: Target location of the final webpage - Status Code: These are the HTTP redirect status code, to indicate the output of the action. Defaults to 302 - Response: Response calss to use when initiating the response. We dont need to care much about the last one right now. Some of the other status codes are: Note: We first need to import the redirect attribute before using it. from flask import redirect 2. Error Handling for Redirect Flask also has a abort() function for the special redirect failure cases. The syntax for abort() function: abort(<error_code>) The various Error Codes are as follows: Note: We need to import this attribute first as well. from flask import abort 3. Code for our application Now consider the following example code: from flask import Flask,render_template,request,redirect app = Flask(__name__) @app.route('/form') def form(): return render_template('form.html') @app.route('/verify', methods = ['POST', 'GET']) def verify(): if request.method == 'POST': name = request.form['name'] return redirect(f"/user/{name}") @app.route('/user/<name>') def user(name): return f"Your name is {name}" app.run(host='localhost', port=5000) Here: - The Form View simply displays the Form Template to the user. - When the user submits the Form, the form data is sent, along with request, to the Verify View. (Look at form.html – action attribute) - The Verify View, pulls out the name data from the form and then redirects the user to the User View (along with the name data). Do check out our Introduction to Flask article if you have any trouble understanding the syntax. The form.html is: <form action="/verify" method = "POST"> <p>name <input type = "text" name = "name" /></p> <p><input type = "submit" value = "Submit" /></p> </form> We are using a Flask form to take input from the user and then redirect it to a webpage showing the name back. Here, the sequence is: - The form function shows the Form. - Once the user submits his name, the verify function pulls out the name from the Form and redirects him to the User function. - The User function takes in the name as an argument and shows it on the webpage. 4. Implementation of the Code Now run the server and check it out Hit submit That’s it guys !! The name is appearing up there. Conclusion That’s it guys for this tutorial !! Try to figure out how to include the abort function in your code as a practise. We will see you guys in the next article !! Till then, Happy coding 🙂
https://www.askpython.com/python-modules/flask/flask-redirect-url
CC-MAIN-2021-10
refinedweb
629
65.62
Measuring Performance with PStats QUICK INTRODUCTION PStats is Panda’s built-in performance analysis tool. It can graph frame rate over time, and can further graph the work spent within each frame into user-defined subdivisions of the frame (for instance, app, cull and draw), and thus can be an invaluable tool in identifying performance bottlenecks. It can also show frame-based data that reflects any arbitrary quantity other than time intervals, for instance, texture memory in use or number of vertices drawn. The performance graphs may be drawn on the same computer that is running the Panda client, or they may be drawn on another computer on the same LAN, which is useful for analyzing fullscreen applications. The remote computer need not be running the same operating system as the client computer. To use PStats, you first need to build the PStats server program, which is part of the Pandatool tree (it’s called pstats.exe on Windows, and pstats on a Unix platform). Start by running the PStats server program (it runs in the background), and then start your Direct/Panda client with the following in your startup code: // Includes: pStatClient.h if (PStatClient::is_connected()) { PStatClient::disconnect(); } string host = ""; // Empty = default config var value int port = -1; // -1 = default config var value if (!PStatClient::connect(host, port)) { std::cout << "Could not connect to PStat server." << std::endl; } Or if you’re running pview, press shift-S. Any of the above will contact your running PStats server program, which will proceed to open a window and start a running graph of your client’s performance. If you have multiple computers available for development, it can be advantageous to run the pstats server on a separate computer so that the processing time needed to maintain and update the pstats user interface isn’t taken from the program you are profiling. If you wish to run the server on a different machine than the client, start the server on the profiling machine and add the following variable to your client’s Config.prc file, naming the hostname or IP address of the profiling machine: pstats-host profiling-machine-ip-or-hostname If you are developing Python code, you may be interested in reporting the relative time spent within each Python task (by subdividing the total time spent in Python, as reported under “Show Code”). To do this, add the following lines to your Config.prc file before you start ShowBase: task-timer-verbose 1 pstats-tasks 1 Caveats OpenGL is asynchronous, which means that function calls aren’t guaranteed to execute right away. This can make performance analysis of OpenGL operations difficult, as the graphs may not accurately reflect the actual time that the GPU spends doing a certain operation. However, if you wish to more accurately track down rendering bottlenecks, you may set the following configuration variable: pstats-gpu-timing 1 This will enable a new set of graphs that use timer queries to measure how much time each task is actually taking on the GPU. If your card does not support it or does not give reliable timer query information, a crude way of working around this and getting more accurate timing breakdown, you can set this: gl-finish 1 Setting this option forces Panda to call glFinish() after every major graphics operation, which blocks until all graphics commands sent to the graphics processor have finished executing. This is likely to slow down rendering performance substantially, but it will make PStats graphs more accurately reflect where the graphics bottlenecks are. THE PSTATS SERVER (The user interface) The GUI for managing the graphs and drilling down to view more detail is entirely controlled by the PStats server program. At the time of this writing, there are two different versions of the PStats server, one for Unix and one for Windows, both called simply pstats. The interfaces are similar but not identical; the following paragraphs describe the Windows version. When you run pstats.exe, it adds a program to the taskbar but does not immediately open a window. The program name is typically “PStats 5185”, showing the default PStats TCP port number of 5185; see “HOW IT WORKS” below for more details about the TCP communication system. For the most part you don’t need to worry about the port number, as long as server and client agree (and the port is not already being used by another application). Each time a client connects to the PStats server, a new monitor window is created. This monitor window owns all of the graphs that you create to view the performance data from that particular connection. Initially, a strip chart showing the frame time of the main thread is created by default; you can create additional graphs by selecting from the Graphs pulldown menu. Time-based Strip Charts This is the graph type you will use most frequently to examine performance data. The horizontal axis represents the passage of time; each frame is represented as a vertical slice on the graph. The overall height of the colored bands represents the total amount of time spent on each frame; within the frame, the time is further divided into the primary subdivisions represented by different color bands (and labeled on the left). These subdivisions are called “collectors” in the PStats terminology, since they represent time collected by different tasks. Normally, the three primary collectors are App, Cull, and Draw, the three stages of the graphics pipeline. Atop these three colored collectors is the label “Frame”, which represents any remaining time spent in the frame that was not specifically allocated to one of the three child collectors (normally, there should not be significant time reported here). The frame time in milliseconds, averaged over the past three seconds, is drawn above the upper right corner of the graph. The labels on the guide bars on the right are also shown in milliseconds; if you prefer to think about a target frame rate rather than an elapsed time in milliseconds, you may find it useful to select “Hz” from the Units pulldown menu, which changes the time units accordingly. The running Panda client suggests its target frame rate, as well as the initial vertical scale of the graph (that is, the height of the colored bars). You can change the scale freely by clicking within the graph itself and dragging the mouse up or down as necessary. One of the horizontal guide bars is drawn in a lighter shade of gray; this one represents the actual target frame rate suggested by the client. The other, darker, guide bars are drawn automatically at harmonic subdivisions of the target frame rate. You can change the target frame rate with the Config.prc variable pstats-target-frame-rate on the client. You can also create any number of user-defined guide bars by dragging them into the graph from the gray space immediately above or below the graph. These are drawn in a dashed blue line. It is sometimes useful to place one of these to mark a performance level so it may be compared to future values (or to alternate configurations). The primary collectors labeled on the left might themselves be further subdivided, if the data is provided by the client. For instance, App is often divided into Show Code, Animation, and Collisions, where Show Code is the time spent executing any Python code, Animation is the time used to compute any animated characters, and Collisions is the time spent in the collision traverser(s). To see any of these further breakdowns, double-click on the corresponding colored label (or on the colored band within the graph itself). This narrows the focus of the strip chart from the overall frame to just the selected collector, which has two advantages. Firstly, it may be easier to observe the behavior of one particular collector when it is drawn alone (as opposed to being stacked on top of some other color bars), and the time in the upper-right corner will now reflect just the total time spent within just this collector. Secondly, if there are further breakdowns to this collector, they will now be shown as further colored bars. As in the Frame chart, the topmost label is the name of the parent collector, and any time shown in this color represents time allocated to the parent collector that is not accounted for by any of the child collectors. You can further drill down by double-clicking on any of the new labels; or double-click on the top label, or the white part of the graph, to return back up to the previous level. Value-based Strip Charts There are other strip charts you may create, which show arbitrary kinds of data per frame other than elapsed time. These can only be accessed from the Graphs pulldown menu, and include things such as texture memory in use and vertices drawn. They behave similarly to the time-based strip charts described above. Piano Roll Charts This graph is used less frequently, but when it is needed it is a valuable tool to reveal exactly how the time is spent within a frame. The PStats server automatically collects together all the time spent within each collector and shows it as a single total, but in reality it may not all have been spent in one continuous block of time. For instance, when Panda draws each display region in single-threaded mode, it performs a cull traversal followed by a draw traversal for each display region. Thus, if your Panda client includes multiple display regions, it will alternate its time spent culling and drawing as it processes each of them. The strip chart, however, reports only the total cull time and draw time spent. Sometimes you really need to know the sequence of events in the frame, not just the total time spent in each collector. The piano roll chart shows this kind of data. It is so named because it is similar to the paper music roll for an old- style player piano, with holes punched down the roll for each note that is to be played. The longer the hole, the longer the piano key is held down. (Think of the chart as rotated 90 degrees from an actual piano roll. A player piano roll plays from bottom to top; the piano roll chart reads from left to right.) Unlike a strip chart, a piano roll chart does not show trends; the chart shows only the current frame’s data. The horizontal axis shows time within the frame, and the individual collectors are stacked up in an arbitrary ordering along the vertical axis. The time spent within the frame is drawn from left to right; at any given time, the collector(s) that are active will be drawn with a horizontal bar. You can observe the CPU behavior within a frame by reading the graph from left to right. You may find it useful to select “pause” from the Speed pulldown menu to freeze the graph on just one frame while you read it. Note that the piano roll chart shows time spent within the frame on the horizontal axis, instead of the vertical axis, as it is on the strip charts. Thus, the guide bars on the piano roll chart are vertical lines instead of horizontal lines, and they may be dragged in from the left or the right sides (instead of from the top or bottom, as on the strip charts). Apart from this detail, these are the same guide bars that appear on the strip charts. The piano roll chart may be created from the Graphs pulldown menu. Additional threads If the panda client has multiple threads that generate PStats data, the PStats server can open up graphs for these threads as well. Each separate thread is considered unrelated to the main thread, and may have the same or an independent frame rate. Each separate thread will be given its own pulldown menu to create graphs associated with that thread; these auxiliary thread menus will appear on the menu bar following the Graphs menu. At the time of this writing, support for multiple threads within the PStats graph is largely theoretical and untested. Color and Other Optional Collector Properties If you do not specify a color for a particular collector, it will be assigned a random color at runtime. At present, the only way to specify a color is to modify panda/src/pstatclient/pStatProperties.cxx, and add a line to the table for your new collector(s). You can also define additional properties here such as a suggested initial scale for the graph and, for non-time-based collectors, a unit name and/or scale factor. The order in which these collectors are listed in this table is also relevant; they will appear in the same order on the graphs. The first column should be set to 1 for your new collectors unless you wish them to be disabled by default. You must recompile the client (but not the server) to reflect changes to this table. HOW TO DEFINE YOUR OWN COLLECTORS The PStats client code is designed to be generic enough to allow users to define their own collectors to time any arbitrary blocks of code (or record additional non-time-based data), from either the C++ or the Python level. The general idea is to create a PStatCollector for each separate block of code you wish to time. The name which is passed to the PStatCollector constructor is a unique identifier: all collectors that share the same name are deemed to be the same collector. Furthermore, the collector’s name can be used to define the hierarchical relationship of each collector with other existing collectors. To do this, prefix the collector’s name with the name of its parent(s), followed by a colon separator. For instance, PStatCollector(“Draw:Flip”) defines a collector named “Flip”, which is a child of the “Draw” collector, defined elsewhere. You can also define a collector as a child of another collector by giving the parent collector explicitly followed by the name of the child collector alone, which is handy for dynamically-defined collectors. For instance, PStatCollector(draw, “Flip”) defines the same collector named above, assuming that draw is the result of the PStatCollector(“Draw”) constructor. Once you have a collector, simply bracket the region of code you wish to time with collector.start() and collector.stop(). It is important to ensure that each call to start() is matched by exactly one call to stop(). If you are programming in C++, it is highly recommended that you use the PStatTimer class to make these calls automatically, which guarantees the correct pairing; the PStatTimer’s constructor calls start() and its destructor calls stop(), so you may simply define a PStatTimer object at the beginning of the block of code you wish to time. If you are programming in Python, you must call start() and stop() explicitly. When you call start() and there was another collector already started, that previous collector is paused until you call the matching stop() (at which time the previous collector is resumed). That is, time is accumulated only towards the collector indicated by the innermost start() .. stop() pair. Time accumulated towards any collector is also counted towards that collector’s parent, as defined in the collector’s constructor (described above). It is important to understand the difference between collectors nested implicitly by runtime start/stop invocations, and the static hierarchy implicit in the collector definition. Time is accumulated in parent collectors according to the statically-defined parents of the innermost active collector only, without regard to the runtime stack of paused collectors. For example, suppose you are in the middle of processing the “Draw” task and have therefore called start() on the “Draw” collector. While in the middle of processing this block of code, you call a function that has its own collector called “Cull:Sort”. As soon as you start the new collector, you have paused the “Draw” collector and are now accumulating time in the “Cull:Sort” collector. Once this new collector stops, you will automatically return to accumulating time in the “Draw” collector. The time spent within the nested “Cull:Sort” collector will be counted towards the “Cull” total time, not the “Draw” total time. If you wish to collect the time data for functions, a simple decorator pattern can be used below, as below: from panda3d.core import PStatCollector def pstat(func): collectorName = "Debug:%s" % func.__name__ if hasattr(base, 'custom_collectors'): if collectorName in base.custom_collectors.keys(): pstat = base.custom_collectors[collectorName] else: base.custom_collectors[collectorName] = PStatCollector(collectorName) pstat = base.custom_collectors[collectorName] else: base.custom_collectors = {} base.custom_collectors[collectorName] = PStatCollector(collectorName) pstat = base.custom_collectors[collectorName] def doPstat(*args, **kargs): pstat.start() returned = func(*args, **kargs) pstat.stop() return returned doPstat.__name__ = func.__name__ doPstat.__dict__ = func.__dict__ doPstat.__doc__ = func.__doc__ return doPstat To use it, either save the function to a file and import it into the script you wish to debug. Then use it as a decorator on the function you wish to time. A collection named Debug will appear in the Pstats server with the function as its child. from pstat_debug import pstat @pstat def myLongRunFunction(): """ This function does something long """ HOW IT WORKS (What’s actually happening) The PStats code is divided into two main parts: the client code and the server code. The PStats Client The client code is in panda/src/pstatclient, and is available to run in every Panda client unless it is compiled out. (It will be compiled out if OPTIMIZE is set to level 4, unless DO_PSTATS is also explicitly set to non-empty. It will also be compiled out if NSPR is not available, since both client and server depend on the NSPR library to exchange data, even when running the server on the same machine as the client.) The client code is designed for minimal runtime overhead when it is compiled in but not enabled (that is, when the client is not in contact with a PStats server), as well as when it is enabled (when the client is in contact with a PStats server). It is also designed for zero runtime overhead when it is compiled out. There is one global PStatClient class object, which manages all of the communications on the client side. Each PStatCollector is simply an index into an array stored within the PStatClient object, although the interface is intended to hide this detail from the programmer. Initially, before the PStatClient has established a connection, calls to start() and stop() simply return immediately. When you call PStatClient.connect(), the client attempts to contact the PStatServer via a TCP connection to the hostname and port named in the pstats- host and pstats-port Config.prc variables, respectively. (The default hostname and port are localhost and 5185.) You can also pass in a specific hostname and/or port to the connect() call. Upon successful connection and handshake with the server, the PStatClient sends a list of the available collectors, along with their names, colors, and hierarchical relationships, on the TCP channel. Once connected, each call to start() and stop() adds a collector number and timestamp to an array maintained by the PStatClient. At the end of each frame, the PStatClient boils this array into a datagram for shipping to the server. Each start() and stop() event requires 6 bytes; if the resulting datagram will fit within a UDP packet (1K bytes, or about 84 start/stop pairs), it is sent via UDP; otherwise, it is sent on the TCP channel. (Some fraction of the packets that are eligible for UDP, from 0% to 100%, may be sent via TCP instead; you can specify this with the pstats-tcp-ratio Config.prc variable.) Also, to prevent flooding the network and/or overwhelming the PStats server, only so many frames of data will be sent per second. This parameter is controlled by the pstats-max-rate Config.prc variable and is set to 30 by default. (If the packets are larger than 1K, the max transmission rate is also automatically reduced further in proportion.) If the frame rate is higher than this limit, some frames will simply not be transmitted. The server is designed to cope with missing frames and will assume missing frames are similar to their neighbors. The server does all the work of analyzing the data after that. The client’s next job is simply to clear its array and prepare itself for the next frame. The PStats Server The generic server code is in pandatool/src/pstatserver, and the GUI-specific server code is in pandatool/src/gtk-stats and pandatool/src/win-stats, for Unix and Windows, respectively. (There is also an OS-independent text-stats subdirectory, which builds a trivial PStats server that presents a scrolling- text interface. This is mainly useful as a proof of technology rather than as a usable tool.) The GUI-specific code is the part that manages the interaction with the user via the creation of windows and the handling of mouse input, etc.; most of the real work of interpreting the data is done in the generic code in the pstatserver directory. The PStatServer owns all of the connections, and interfaces with the NSPR library to communicate with the clients. It listens on the specified port for new connections, using the pstats-port Config.prc variable to determine the port number (this is the same variable that specifies the port to the client). Usually you can leave this at its default value of 5185, but there may be some cases in which that port is already in use on a particular machine (for instance, maybe someone else is running another PStats server on another display of the same machine). Once a connection is received, it creates a PStatMonitor class (this class is specialized for each of the different GUI variants) that handles all the data for this particular connection. In the case of the windows pstats.exe program, each new monitor instance is represented by a new toplevel window. Multiple monitors can be active at once. The work of digesting the data from the client is performed by the PStatView class, which analyzes the pattern of start and stop timestamps, along with the relationship data of the various collectors, and boils it down into a list of the amount of time spent in each collector per frame. Finally, a PStatStripChart or PStatPianoRoll class object defines the actual graph output of colored lines and bars; the generic versions of these include virtual functions to do the actual drawing (the GUI specializations of these redefine these methods to make the appropriate calls).
https://docs.panda3d.org/1.10/cpp/optimization/using-pstats
CC-MAIN-2022-27
refinedweb
3,797
58.01
ReduceLROnPlateau is a scheduling technique that monitors a quantity and decays the learning rate when the quantity stops improving. The improvement of the quantity is based on whether it increases or decreases by a certain minimum amount. This minimum amount is the threshold. The user is able to define one of the two modes : min and max. If max is chosen, then the learning rate is decayed once the monitored quantity stops increasing by a certain minimum threshold. If min is chosen, then the learning rate is decayed once the monitored quantity stops decreasing by a certain minimum threshold. It is the factor by which the learning rate is decreased when the quantity stops improving. The factor value should be greater than 0 and less than 1. If the value is greater than 1, then the learning rate will explode. If the factor is 1, then it would never decay the learning rate. It is number of epochs with no improvement after which the learning rate is reduced. If the patience is 10, then it ignores the first 10 epochs with no improvement in the quantity and reduces the learning rate in the 11th epoch. It is the minimum value by which the quantity should change in order to count as an "improvement". For example, if threshold is 0.001 and the monitored quantity changes from 0.003 to 0.0025, then this is not counted as improvement. The user is able to choose rel or abs for the threshold mode. It essentially defines the way in which a dynamic threshold is calculated. Mathematically in rel mode: dynamic threshold = best * (1+ threshold) in 'max' mode or best * (1- threshold) in 'min' mode. In abs mode: dynamic threshold = best + threshold in 'max' mode or best - threshold in 'min' mode. It is the number of epochs that the tool would wait after the reduction of the learning rate before resuming the normal operations. It is the minimum learning rate for all the parameters. The learning rate would be this constant minimum once it reaches it. It is the minimum amount of decay set for the learning rate. difference between the previous learning rate and the current learning rate is less than Eps, then this decay is ignored and the previous learning rate is used. import torch model = [Parameter(torch.randn(2, 2, requires_grad=True))] optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate, weight_decay=0.01, amsgrad=False) scheduler=torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08).
https://hasty.ai/docs/mp-wiki/scheduler/reducelronplateau
CC-MAIN-2022-40
refinedweb
430
59.19
01:06 PM 3/26/2001 -0500, Terrel Shumway wrote: >Of course, if you just want rapid deployment, and don't have sensitive >content/code, this may be just the ticket. YMMV. Yes, I think it depends on how paranoid you feel you have to be for a particular site. Although, regardless of how much care you take under the hood, you still need a test suite that tries to break your security (and hopefully that test suite is automated). > my $0.02 > -- Terrel > >(Try after a "quick fix".) I would hope that a production site would have such files stripped out or at least reported, again preferably by a script that runs automatically (via cron). And ExtensionsToServe will address this in 1.0. Chuck Esterbrook wrote: >. This sounds like acquisition in training. Based on my experience with Zope and acquisition, I would say: There are times when it saves a little effort (thought and planning) in the short run, but from a security perspective, it make the site a nightmare to verify. Zope has a lot of code that deals with making sure that an object has the proper security attributes to be used in a given context. Webware has no such support (and probably will not for a while -- Zope, with several paid full-time developers, is still trying on getting it right after two years of wide public exposure.) My recommendation is exactly what Chuck recommended to me a few months ago, when I complained about security and acquisition: Unless a file represents a concrete document (SitePage is an abstract document), keep it out of the document tree. Put SitePage.py in a lib/ directory in a MySite/ package, then say: from MySite.SitePage import SitePage The grey hair you save will be your own. Of course, if you just want rapid deployment, and don't have sensitive content/code, this may be just the ticket. YMMV. my $0.02 -- Terrel (Try after a "quick fix".) Chuck Esterbrook wrote: > >(Try after a "quick fix".) > > I would hope that a production site would have such files stripped out or > at least reported, again preferably by a script that runs automatically > (via cron). > Exactly: security in depth. (BTW, there should be no "quick fix" for a high-paranoia production site.) > And ExtensionsToServe will address this in 1.0. That would certainly reduce the problem. Will ExtensionsToServe return "404 Not Found" if the client sends the full name as suggested above, or will it only kick in to generate extensions if the specified file does not exist? At 03:59 PM 3/26/2001 -0500, Terrel Shumway wrote: >That would certainly reduce the problem. Will ExtensionsToServe return >"404 Not Found" if the client sends the full name as suggested above, or >will it only kick in to generate extensions if the specified file does not >exist? I don't even understand what "kick in to generate extensions..." means. My thought for ExtensionsToServe was to generate 404 for any request whose server side path did NOT have an extension found in ExtensionsToServe. After that check clears, it's business as usual. -Chuck I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/webware/mailman/message/7617970/
CC-MAIN-2016-30
refinedweb
564
71.14
Michal,On Sun, 11 May 2008, Michal Simek wrote:> > > > Please split this modification code out into a separate function. The> > same code is used below.> > Add Macro.Please use a (inline) function whenever possible. Macros are harder toread and not type safe.> > Can you please move a new architecture to clockevents / clocksource> > right from the beginning ? No need to invent another incompatible set> > of time(r) related functions.> > I move whole code to GENERIC_TIME. Did you meant any others changes?GENERIC_TIME and GENERIC_CLOCKEVENTS. You get high resolution timersand dynamic ticks for free when your timer hardware allows it.> >> +#define NO_IRQ (-1)> >> +> >> +static inline int irq_canonicalize(int irq)> >> +{> >> + return (irq);> >> +}> > > > Why is this needed ? Any users ?> > is used in serial_core.c> 684 new_serial.irq = irq_canonicalize(new_serial.irq);Doh, forgot about that one.Thanks, tglx
https://lkml.org/lkml/2008/5/11/66
CC-MAIN-2016-36
refinedweb
133
62.64
CRUD : Cannot create model with undefined play.db.jpa.Blob field Reported by Erwan Loisant | October 5th, 2010 @ 11:56 AM | in 1.1 I have a model with a "Blob" property : import play.db.jpa.Blob; import play.db.jpa.Model; @Entity public class Article extends Model { public String name; public Blob image; } Image property is not mandatory (can be NULL in db), but when I try to create an article instance with the crud module, without specify the image value, play! crash with the following error : 10:15:49,284 INFO ~ Connected to jdbc:mysql://localhost/mypc?useUnicode=yes&characterEncoding=UTF-8&connectionCollation=utf8_general_ci 10:15:49,948 DEBUG ~ insert into Article (name, image) values (?, ?) com.mysql.jdbc.JDBC4PreparedStatement@1ea6778: insert into Article (name, image) values ('Blabla', NOT SPECIFIED ) 10:15:49,949 WARN ~ SQL Error: 0, SQLState: 07001 10:15:49,949 ERROR ~ No value specified for parameter 2 10:15:50,057 ERROR ~ If I specify the image value, the article is correctly saved. Imported from Launchpad: Erwan Loisant October 6th, 2010 @ 09:56 AM - Tag set to crud Niko Schmuck October 20th, 2010 @ 05:01 PM - Tag changed from crud to binary, blob, crud, file-attachment This bug affects me too. Is there a rough estimation on when it will be fixed? Thanks, Niko Guillaume Bort October 20th, 2010 @ 05:20 PM - Assigned user set to Guillaume Bort Is it MySQL specific? Works for me with HSQL. Play Duck October 20th, 2010 @ 05:32 PM (from [17d97affbcc29f5d1d6f4d9ad9037ea522edce2f]) [#31] Always specify all SQL values for Blob type, even NULL ones (fails on MySQL)... Play Duck October 20th, 2010 @ 05:32 PM (from [1cd3822b8ff159bb2ef94bc22fd5a67d21cf0bd8]) [#31] Always specify all SQL values for Blob type, even NULL ones (fails on MySQL)... Guillaume Bort October 20th, 2010 @ 05:32 PM - State changed from new to resolved - Milestone set to 1.1 Ok yes, MySQL specific. And fixed. Niko Schmuck October 21st, 2010 @ 09:57 PM Thanks for fixing this. Since the MySQL JDBC driver is coming with the play distribution, IMHO there is certain perception in the user base that it will work out-of-the-box. Would it be possible to have the (CI) test suite running on HSQL as well as on MySQL? Guillaume Bort October 22nd, 2010 @ 07:21 AM Yes we should probably do that. But it's way more complicated to setup. We'll do that in the future. Erwan Loisant December 15th, 2010 @ 09:04 AM - Tag changed from binary, blob, crud, file-attachment to crud - Milestone order changed from 16 31 CRUD : Cannot create model with undefined play.db.jpa.Blob field (from [17d97affbcc29f5d1d6f4d9ad9037ea522edce2f]) [#31] A... 31 CRUD : Cannot create model with undefined play.db.jpa.Blob field (from [1cd3822b8ff159bb2ef94bc22fd5a67d21cf0bd8]) [#31] A...
https://play.lighthouseapp.com/projects/57987/tickets/31-crud-cannot-create-model-with-undefined-playdbjpablob-field
CC-MAIN-2019-30
refinedweb
459
56.05
I. If I was designing these SVGs myself, I would not have run into many of these issues, but it’s a more likely scenario that the engineer and designer are not the same person on a project. In this post, I'll chronicle some of what I trained them on to ensure stellar performance for SVGs. Note: I use Illustrator for the creation and optimization of my SVGs because I've found the export and tooling superior to Sketch's. I'll readily admit that there might be ways that Sketch works with SVGs that I'm not aware of. But I will say I have seen it make strange <clipPath>s in the place of paths, which makes me steer clear of it. For the examples below, I'll be using Illustrator. Use whatever works for you. Talk Upfront This piece is not always possible, but whenever it is, try to talk to the designer before they do a lot of the work, to explain what they should be thinking about when they are creating SVGs. The most easy to convey piece should be that simply drawing something on paper and then tracing it in illustrator will come with a lot of junky path data and should never be used as-is. Simple shapes and pen drawing paths are preferred. Very complex objects can become large very quickly, so the less points the path has to draw, the better for performance. This doesn’t mean that you can’t make seemingly complex shapes. But hundreds of path points can sometimes have the same appearance and interest that thousands of path points do. Reduce Path Points If you’re going to create a hand drawing, you can trace, it, but past that point you should use Object > Path > Simplify. You will need to check the box that allows for preview because this can potentially ruin the image. It's also worth it to say that the image degrades quickly, so usually the most I can get away with is 91% or so. This still gives me a good return, with a high number of path point reduction. This is also probably the quickest way to accomplish this type of reduction. A more labor intensive way, that I will use for smaller pieces that are unnecessarily complex, is to redraw it with the pen tool. Sometimes this is very little effort for a large payoff, but it really depends on the shape. You can also put a few shapes together, merge them with the path tool, and then modify the points with the white arrow, to simulate existing shapes. It may seem intimidating at first, but you can use the pen tool to really quickly make more complex areas. Then take all of these shapes and use the pathfinder tool to merge them all together. If it doesn’t look quite right, don’t fear! You can still reduce the opacity on what you made by a little (helps so that you can see what you’re trying to emulate in the shape underneath). Then you can grab the direct selection tool, (A in quickkeys, the white arrow on the toolbar), and drag those small points around until you get a more refined shape. Never hurts to zoom in a bit to see the details there. Remove Repeated Gradient Defs By default, Illustrator and other vector editing tools will, at best, create a gradient and put them in defs, but at worst, create jpgs of the gradient or add many separate gradients even though just one can be reused. In the latter case, use Jake Albaugh’s gradient optimizer. It’s a smart tool that will collapse multiple unused gradients into only what’s necessary. I've seen it reduce the file size of an SVG by half, though that was a file with an unusual amount of similar gradients. In the case of the former, you might find you can write the gradient by hand instead of using the png or jpg that the editor provides. Here are the values that SVG needs to create a gradient: - It needs to be contained within a linearGradient block, and needs to have an id so that you can reference it in the CSS to apply it to SVG elements - It uses stop offsets from 0-1 with stop-color attributes where you specify what colors you want at which points. <defs> <linearGradient id="linear-gradient" y1="75" x2="150" y2="75" gradientUnits="userSpaceOnUse"> <stop offset="0" stop- <stop offset=".5" stop- <stop offset="1" stop- </linearGradient> </defs> .path-class { fill: url(#linear-gradient); } Reduce the size of your Canvas Making your canvas not too large but not too small helps with the weight of the file because the larger the number, the larger the number of all of the path points. Too small, and you might get a lot of decimals, that when trimmed, warp the image. If you have a nice range (I prefer somewhere like 100 x 100- but this is worth experimenting with), your path points will be small as well without breaking into decimals. To quickly change the size of the artboard in Illustrator (the viewBox in SVG), you can go to Object > Artboards > Fit to Artwork Bounds. Sometimes you will want to be a little more precise about it, and in that case go to File > Document Setup > Edit Artboards. This will allow you to hand-tweak the visible area or even specify the units you want precisely. You may have to change the size of the artwork within a little after doing so as well. Export, then Optimize I prefer to use Illustrator because the export settings for SVG are more advanced than Sketch. I don’t use Inkscape but I know some people love it. If you are using Illustrator, use Export As > SVG not Save As > SVG for better results. Even after that step, though, I optimize. Here are some options: - SVGOMG- this is a web-based editor that uses SVGO, it also offers service workers for offline capability - SVGO/SVGO-GUI- this NodeJS-based tool is extremely well done with a lot of options. I recommend using the GUI with it, though, because SVG export can change its appearance. - Peter Collingridge’s SVG Editor- I'm still a fan of this one even though it's not quite as fancy. I also like playing with the experimental editing tab. Be mindful of the toggles here. The ones that I find myself checking and unchecking the most are: - Clean IDs- this will remove any carefully named layers you may have. - Collapse useless groups- you might have grouped them to animate them all together, or just to keep things organized. - Merge paths- nine times out of ten this one is ok, but sometimes merging a lot of paths keeps you from being about to move elements in the DOM around independently. - Prettify- This is only necessary when you need to working within the SVG, for animation or other manipulation purposes. Finally, make sure you're gzipping your files (I usually do this as part of the overall build process), but it makes a huge difference in terms of SVG filesize. Use SVG Filters instead of Appearance Effects A few times while working with an SVG from another designer, we discovered that using the effects in the appearance panel, such as drop shadow, produced a monster base64 file that was cumbersome and expensive. This problem can be solved by using an SVG filter instead, available at Effect > SVG Filters and then choosing one from the dropdown. It's worth mentioning that these will be available to you only when the file is in .ai format, not once it's in .svg format (which is why I recommend always keeping the .ai source file). By swapping these out, not only did we improve the appearance of the SVG, but we decreased the filesize from a whopping 1.8MB to 1.2KB! Create a Large Background Shape When tracing an image, oftentimes you will be handed an image with pattern or multiple images “on top of” a background. But Illustrator will not understand these shapes as one large shape beneath a pattern or many other shapes - it will break the base color into the shapes between the pattern. Here is an example of an easy win because you can remove all of these shapes and replace it with one big background image. I find it easiest to trace around the whole containing unit first before removing anything. Remember to make this layer a different color from everything else. Many times the shapes behind can be grabbed all at once by using Select > Same > Fill Color (or Fill and Stroke). This allows you to grab many shapes at once and delete them all at once very quickly. Conclusion These aren't the only ways out there to work with SVGs for better performance, but the main key takeaway is that the less path data you have the better. Being mindful of what you're loading in your SVG files - double check the SVG DOM for cruft and remember to optimize. Going the extra mile designing for performance can shave vital seconds off of the page load of your site. I would guess pixel preview and then tweaking to line up on a pixel grid would help. Far less fractions and a good idea of how much detail is enough. Some great info here! A quick tip for manual tracing in Illustrator: make your sketch/source image a Template layer (go to Layer options or double click on the layer). This locks and dims it so you can put your path layers underneath and see what you’re doing. A plugin for Illustrator I found to be indispensable for optimizing SVGs – Vector Scribe (). It’s a bit pricey but the amount of time it saves me when working with SVGs for web was well worth it. This GIF is pretty wild: Wow, that’s pretty unreal! This is why I write most of my SVGs by hand. Most WYSIWYG editors will bloat things up real quick. Other points of bloat : paths that can very often be expressed as predefined shape elements; path points are frequently floating point numbers that have an absurd level of precision (think 10-20 decimal places), Round people!; Many vector clients will export namespaces and attributes that are only meaningful to that client (like inkscape and sodipodi elements/attributes that can almost always be removed to save 20-30% of file size. And lastly, because it’s an easy trick worth mentioning when serving up SVGs directly as image files (versus embedding them in html directly or as data URL format) : XML based content gzip compresses like a mfing boss. There’s even a designated file extension for compressed svg files that most browsers will handle without having to fuss with HTTP headers or Apache settings : svgz. Use this, and that awesome drop from 1.8MB to 1.2KB drops to .5KB (note: I mention compression last because it’s an easy fix, but should be the final step in optimization after actually cleaning up the svg content itself. Oh, another pet peeve: don’t define text elements and draw the text on top it as well. 99 times out of 100, the text element alone will be fine and save a huge amount of effort drawing letters that your client can already handle via these things called fonts. Gzipping is certainly where it’s at and I should have mentioned that- I was writing from the point of assumption that people are already doing this as part of their build process but that’s sometimes not the case. In terms of the text- I have seen a lot of really funky cross browser inconsistencies with SVG text (I’m looking at you, Safari), so I tend to take that on a case-by case basis, after doing a lot of checks. The rounding is part of the export process in Illustrator, as well as what SVGOMG is good at, so I didn’t get too granular in my explanation of that, but a good thing to point out nonetheless. Thanks for all the good info! You might be interested in trying Boxy SVG editor which generates much cleaner SVG files than Inkscape. It does support opening and saving SVGZ files and allows you to configure geometry precision. It uses proprietary bx: namespace, but only when absolutely necessary to preserve editing capabilities. Oh, and also: Can you share the final SVG of the spidertocat image? I’d love to look at the raw XML directly. Thanks Sarah for this information. Actually, it doesn’t. <defs>purpose is to place reference elements without rendering. Non-rendered elements like gradients can be totally safe placed anywhere. There is no need to add <defs>element just to place gradients and other non-rendered content. Regarding SVGO(MG): without Clean IDs it will not remove unused elements, since it judges whether element is being used on the presence of id. Clean IDs removes non-referenced IDs. In the whole the article is superb! Oh good call, I’ll update the article. Thanks! As of SVG 2, gradients can be safely placed almost everywhere, though browsers don’t support the SVG 2 content model yet which means you can’t put them inside e.g. shape elements. Wonderful article. Illustrator is the way to go for me, too. Sometimes I use Sketch but I’m more of a PC guy. Moreover, Illustrator works better for the web as far as I’m concerned. How i use external svg sprite with JavaScript accessibility. Great tips, Thanks a lot. What about using symbols for repeated elements? Does this provide any performance gains? It definitely makes working with the resulting code a lot simpler. You know, a while ago I did a perf benchmark for SVG with symbols and didn’t see any significant performance gains, but that was over two years ago and specifically for animation, so I could stand to rerun that and do a more official test. Or, you’re welcome to as well. One commenter on CSS-Tricks saw a big perf gain between rendering in symbol over rendering in React, if that helps in the meantime.
https://css-tricks.com/high-performance-svgs/
CC-MAIN-2017-22
refinedweb
2,398
69.41
Step it down it’s a straightforward process. I’ve written a detailed tutorial for those who’d rather not have to figure all this out from scratch. Read on for the full 25 step guide to adding ads and publishing your Android app to market. - Step 1 is, of course, to actually write an Android app. Check out my YouTube channel for a nice video to help you along with that. - Write out a nice description for your app in a word processor. No spelling mistakes! You’ll paste this description all over, so make sure it’s good. - Register a new android app on admob.com. You’ll need the description you wrote above. - On the AdMob site, set admob.com->app name->app settings->google adsense to TRUE. I also like to set a 60 second ad refresh timeout. - Before exiting AdMob, copy out the publisher ID for this particular Android app. It’ll be a long hash string on the top of the page when you click on the program details in your dashboard. - Open Eclipse (or your editor of choice), and open up your Android project directory. - Make sure your Manifest file has your correct project name i.e. com.hunterdavis.easycatwhistle. One accepted general format is com.publisher.program. - Ensure the Manifest file has correct permissions for and file or hardware access and specifies a minimum api level (I generally use 7 or above) - The ads will be dynamic, so also ensure your Manifest has network access permissions - Add your AdMob activity line into your Manifest file. It will be of the form `` - In your main.xml file (and any other layouts you wish to have ads in), paste in your adsense xlns namespace. It’ll look like: xmlns:ads="" - In your main.xml file (and any other layouts you wish to have ads in), paste in your adsense resource id and widget code. It’ll look like: `` - In your activities java file, add an ad network request during your activities oncreate function. It’ll look like AdView adView = (AdView) this.findViewById(R.id.adView);<br></br> adView.loadAd(new AdRequest()); - Open up your favorite editor and create a 512×512 PNG hires image. Draw what you like, as it’ll be the icon for your program. I like to save it into a “deploy” folder as icon.png - Now you need to generate the 3 levels of icon files Android requires. You can either do this manually, or have do it for you. Save each of these 3 icons generated into the res folder (high medium and low), replacing the default icon.png files that get generated by the SDK. - Copy the description of your app from above into a blog post or permalink page. You can use this as the website link during the publishing step. - Execute your app in a high-res emulator (like the 800×600 one that comes with the SDK), and take two Screenshots of your app. (Camera button on device menu in Eclipse) - Upload these screenshots to the blog post or permalink page you created earlier. - Use the Android tools menu in Eclipse (or the SDK command line tools, etc) to generate a signed application package. It’ll require that you type in your publisher name and location, as well as set a password for your encryption. - *Optional – Test your signed application package on a real live device. I use swiftp on the Android, filezilla on the PC, and easy file manager on the Android. Upload the package to somewhere you can browse to it on your Android device over ftp (or USB or whatever) and ensure your program looks good with ads and rotation etc. - Upload the signed package, icon, screenshots, and fancy description to the Android market and click publish. - Find your Market page, it’ll be of the form market.android.com/details?id=com.YOURPUBLISHER.YOURAPPNAME - Link to your market page from your blog post or permalink page. - Post your market page, blog post, or permalink page to Facebook, Reddit, Hacker News, Google+, what have you! - Profit!
http://www.hunterdavis.com/2011/08/04/step-by-step-how-to-publish-and-profit-from-your-android-application.html
CC-MAIN-2019-04
refinedweb
684
73.98
Unformatted text preview: EE 2361 - Introduction to M icrocontrollers Laboratory # 4a Bit Banging an RGB LED in Assembly Page 1 EE 2361 - Lab # 4a ECE Department Background Serial communications protocols are a fundamental component of operating microcontrollers in today’s connected devices. Serial communication allows for complex information to pass between devices over 4, 3, 2 or even just 1 wire. Allowing everything from displays to sensors to function. These protocols are so common and important, that they often have custom digital logic, referred to as peripherals embedded on the MCU to improve efficiency. In EE1301, we commonly used the “Serial” connection to output data to a computer over USB. In later labs, we will learn how to use these optimized digital circuits to communicate over UART, I2C, or SPI. However on the cutting edge of development in custom systems, electrical engineers often need to develop software code to implement serial communication protocols from scratch. In our case the Individually Addressable RGB LED (iLED), introduced in EE1301, has no such dedicated digital support circuits. In this lab, we will create our own implementation of a custom serial communication standard. Purpose In this lab you’ll familiarize yourself with the basic assembly instructions for changing the state of the GPIO pins on your PIC24 microcontroller. You’ll explore the timing implications of various instructions and their impact on the resulting output waveform. You’ll combine these skills to change the color of an individually addressable RGB LED (iLED). In the next lab, you’ll package up your assembly instructions into a C-library so that you can create sparkly light shows quickly and easily! Supplemental Resources PIC24FJ64GA004 - Family Datasheet ● Section 10.1 Parallel I/O (PIO) Ports 16-bit MCU and DSC Programmer's Reference Manual ● REPEAT, CALL, and RETURN Functions iLED from Adafruit References: ● WS2812 Datasheet ● Device Description - Individually Addressable LEDs (from EE1301) ● EE1301 - IoT Lab #2 - Section Individually Addressable LEDs Required Components: Standard PIC24 requirements (caps, pullup, debug header, etc.) Page 2 EE 2361 - Lab # 4a ECE Department iLED 100 Ohm Resistor Standard LED or LED strip 220 Ohm Resistor Pre-Lab Creating an assembly project in MPLAB X As discussed in Lab 1, you will need to create a new project for this lab. Here is a brief outline of the steps: 1.) In MPLAB X, click on “File” → “New Project…” 2.) Select Microchip Embedded → Standalone Project 3.) Select 16-bit MCU (PIC24) and PIC24FJ64GA002 4.) Leave the Debug Header set to “None” 5.) Select your Tool as “Simulator” 6.) Select your Compiler as “XC16” 7.) Finally, give your project a descriptive name! (for example “x500_labX_vXXX” → “orser_lab2_v001”) 8.) Click Finish Add boilerplate code for assembly language project The boilerplate code for an assembly-based project is slightly different than the C-based project we did in Lab 2. First, create a new assembly language file. 1.) In the “Projects” tab of MPLAB X, right-click on the “Source Files” directory and select “New” → “AssemblyFile.s” Careful! There are three options that look similar. Page 3 EE 2361 - Lab # 4 # 4a ECE Department 4.) In the editor enter the following “boilerplate” text into your new source file: .equ __P24FJ64GA002,1 ; required "boiler-plate" (BP) .include "p24Fxxxx.inc" ; BP #include "xc.inc" & COE_ON & BKBUG_ON & GWRP_ON & GCP_ON & JTAGEN_OFF .bss counter: .space stack: .space ; put the following labels in RAM 2 ; a variable that takes two bytes (we won’t use ; i t for now, but put here to make this a generic ; t emplate to be used later). 32 ; this will be our stack area, needed for func calls .text ; BP (put the following data in ROM(program memory)) ;because we are using the C c ompiler to assemble our code, we need a "_main" label ;somewhere. (There's a link s tep that looks for it.) .global _main ; BP _main: b clr CLKDIV,#8 ;BP n op ; ; --- B egin your p rogram b elow h ere ---. Page 5 EE 2361 - Lab # 4a ECE Department b.) Right-Click on Lab 1 project, click Properties, select Simulator as the target c.) Add a breakpoint on the line “AD1PCFG = 0x9fff;” d.) Click Debug Main Project ( this: ), if all goes well it should look something like Example of active breakpoint in Lab1 e.) Click on the menu item Window → Debugging → Disassembly, it should look something like this: Example of Disassembled Code You can see the C-code gets directly converted into two assembly instructions. The first loads a register (w0) with an immediate value (0x9FFF). The second copies the contents of the register (w0) into the memory location of the register (AD1PCFG). Page 6 EE 2361 - Lab # 4a ECE Department 3.) Now mimic this to set AD1PCFG, TRISA, and LATA to the following values: a.) Set AD1PCFG to 0x9FFF // This sets all pins to d igital mode b.) Set TRISA to 0b1111111111111110 // This sets the RA0 pin to o utput mode and the rest of the PORTA pins to input (as a matter of fact, the chip that you work with has only 5 bits on PORT A, but the microcontroller doesn’t mind us setting all those 16 bits in TRISA. It will ignore bits 5..15). c.) Set LATA to 0x0001 // This sets RA0 to output a logic h igh mov mov mov mov mov mov #0x9fff,w0 w0,AD1PCFG ; Set all pins to digital mode #0b1111111111111110,w0 w0,TRISA ; set pin RA0 to output #0x0001,w0 w0,LATA ; set pin RA0 high 4.) It is “bad form” to let your program counter continue to execute past the end of your program. Add a forever-do-nothing loop: foreverLoop: nop bra foreverLoop nop .end ; this doesn’t a ctually end anything. Does n ot t ranslate t o assembly ; code. Just a w ay to tell the compiler we a re d one with t his file. Example of a forever-do-nothing loop NOTE: Don’t forget to set Lab 2 as your main project again! This is a very common mistake while working in MPLAB X. 5.) Right click on Lab Project, click “Set as Main Project”, or, if you don’t see the project on the left, open it. 6.) Now compile your code, click , clean up any errors that result. 7.) Bring u p your logic analyzer and add the signal RA0 8.) If you click ( ) to simulate your code, your code runs forever because there are no breakpoints that stop the program. Try it! 9.) Click the Pause button ( ). It should look something like these screenshots: Page 7 EE 2361 - Lab # 4a ECE Department Example of paused Simulation Example of RA0 output on Logic Analyzer 10) Now press the “stop” button to stop simulating the program. Add a breakpoint on the line that says “mov w0,LATA”. Press the Debug Project button ( ) again, and when the program stops at the breakpoint (make sure the “Logic Analyzer” window is up), press F8 (Step Over ) a couple of times. You will notice that RA0 makes a transition to 1 in the logic analyzer window. Page 8 EE 2361 - Lab # 4a ECE Department Toggle RA0 In this section of the lab you will toggle RA0 and learn to carefully control the duration of the high/low logic pulses. When properly configured our PIC24 operates on a 32 MHz internal Clock, which results in a 16 MIPS (million instructions per second). This is called Fcy=16MHz in the PIC manual. This means most instructions take 1/16MHz = 62.5ns to execute. We can use a combination of two programming methods to create blocking delays of precisely N times 62.5ns. NOP Delay Method The most basic instruction for this purpose are “nop”. Which does exactly what it says “No Operation” for 62.5ns. For example the following code produces the following output. Example of toggling RA0 Notice how the first pulse is narrower than the subsequent pulses? The first rising edge on RA0 happened because of the “mov w0, LATA”, before the “foreverLoop” label. There were only two nop instructions between that instruction and the clr instruction, which results in a falling edge on RA0. However, when we are inside the loop, there are three instructions between the instruction causing a rising edge (inc LATA), and the falling edge (clr LATA). The branch instruction (bra foreverLoop) takes two instruction cycles, resulting in a total of 5 instruction cycles for the high part of the signal as opposed to only 3 during the low-time of the signal. Page 9 EE 2361 - Lab # 4a ECE Department REPEAT Delay Method The second method is excellent for creating precise delays of longer duration. The REPEAT instruction can be found in the 16-bit PIC Programmers Reference Manual (page 355 as of the current printing.) Basically the REPEAT #N command repeats the next instruction N+1 times, where N is the argument of REPEAT, a #l4lit (a 14-bit integer number between 0 and 16383). Take a look at the following example code and simulation: Example use of REPEAT with comments REPEAT can create precise delays of between (3)*62.5ns = 187.5ns and (16385)*62.5ns = 1,024,062.5 ns (slightly more than 1 ms). This makes it a fantastic tool for precise delays less than or equal to 1ms. The code is readable and takes a minimum of program memory space. There are several instructions that cannot follow a REPEAT (including the CALL instruction discussed in the next section) see the 16-bit PIC Programmers Reference Manual for a complete list. Page 10 EE 2361 - Lab # 4a ECE Department CALL and RETURN instructions Function calls (often referred to as subroutines in the PIC documentation) are executed in assembly via the CALL and RETURN instructions. These instructions take several cycles each to store/retrieve important registers, update the program counter, and flush the instruction fetch+decode stage. They are both defined in detail in the 16-bit PIC Programmers Reference Manual. We’ll briefly review them here, but if you need further information p lease read the manual before approaching your TA or instructor. The CALL instruction requires one argument, the address of the subroutine to be called (in our case actually a label.) CALL takes 2 cycles to execute. The RETURN instruction doesn’t require any arguments. RETURN takes 3 cycles. These extra cycles need to be accounted for when building precise timing code. In assembly, a function is simply a label with a RETURN instruction at the end. wait_10cycles: repeat #3 nop return ; ; ; ; 2 cycles for function call cycle to load and prep 1 3+1 cycles to execute NOP 4 times 3 cycles for the return Example of a function in assembly You call this function with the CALL instruction. foreverLoop: call w ait_10cycles ; 1 0 cycles c lr L ATA ; s et pin RA0 low = 1 cycle nop nop repeat nop Inc bra #8 LATA foreverLoop ; ; ; ; ; ; 2 cycles to match BRA delay 1 cycle to load and prep 8+1 cycles to execute NOP 9 times set pin RA0 high = 1 cycle Total = 12 cycles high, 12 cycles low Example of a function call in assembly In the pre-lab deliverables section below you will have to modify this above example to make the pulse generation its own function. HINT: It is possible to create wait functions for each pulse width (high and low). However, if you never reuse the function, it's a waste of time and energy to write it. Instead, you should write the REPEAT/NOP instruction in-line. Page 11 EE 2361 - Lab # 4a ECE Department Pre-Lab Deliverables For the Week 1 Pre-Lab, on your own, you need to create several pieces of complete assembly code. These will be used in lab! 1.) Create an all assembly function and calling assembly program a.) The assembly function should be of the form (pseudocode): void write_bit_stream(void) {} (NOTE: Remember your function should be in assembly, where inputs and outputs are passed by either the “stack” or registers. In this case there are neither.) b.) The function should generate a 24-cycle high, 32-cycle low pulse 2.) Place your function within your program within a forever loop (HINT: you’ll have to add a couple NOPs to account for the forever loop BRA instruction and trim a couple cycles from the high/low delay in the function.) The waveform is shown in the diagram below. Be prepared to demonstrate this program to your TA at the start of lab (ie., hit simulate and view the logic analyzer.) Request Output Pulse Train Specification for Pre-Lab Week 1 3.) Create four additional subroutines using REPEAT to provide e xact delays: ● Delay 1us ● Delay 10us ● Delay 100us ● Delay 1ms Pre-Lab Checklist ❏ ❏ ❏ ❏ Take pre-lab quiz on LATx, PORTx, ADxPCFG Complete the walk-through to create... View Full Document
https://www.coursehero.com/file/26543917/EE2361-Lab4a-RGB-LEDpdf/
CC-MAIN-2021-49
refinedweb
2,142
62.38
On a technical level, stagnant. It only has a few more years to run before it's eclipsed by other platforms. Compiling is generally only slow when there is a PC issue. I compile a pretty big sketch, 22Kbyte (& growing, for a '1284P talking to a GPS/GSM module, temperature reading chips, shift registers, etc.), in just a few seconds.The PC issue in the past has been traced Bluetooth stuff running on the PC. Try searching and turning stuff off.What other user friendly IDE will replace the Arduino? It goes thru some development rough patches from time to time - between 1.0.6 and 1.6.5r2 were problematic, and then again those leading up to 1.6.9 had some issues. (-0023 was good for a while too, then 1.0 to 1.0.5 had issues.) Plenty of folks willing to come up with the add-ins needed to support chips that are not part of the official family of supported chips. #include <SPI.h>#define nop asm volatile ("nop")byte array[] = {0,1,2,3,4,);byte ssPin = 10;void setup(){ pinMode ssPin (OUTPUT); SPI.setClockDivider(SPI_CLOCK_DIV2 ); // 8 MHz rate SPI.begin();}void loop(){PORTB = PORTB & 0b11111011; // low on D10SPDR = array[0]; nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;SPDR = array[1]; nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;SPDR = array[2]; nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;SPDR = array[3]; nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;SPDR = array[4]; nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;PORTB = PORTB | 0b00000100; // high on D10} I noticed another poster complained about how long it takes to compile a new sketch (close to a minute). Sketch uses 28,024 bytes (86%) of program storage space. Maximum is 32,256 bytes. I've actually never programmed on the bare metal. int main () { } Sketch uses 138 bytes (0%) of program storage space. Maximum is 32,256 bytes. Out of curiosity, since you've written so much, have you ever thought about learning embedded programming on the metal? It only has a few more years to run Heh. People have been predicting the demise of both C and 8-bit microcontrollers for decades, now... I do believe C will eventually die. Maybe not in my lifetime.
https://forum.arduino.cc/index.php?topic=407722.msg2805561
CC-MAIN-2019-51
refinedweb
417
69.38
Abstract Jacl, Java Command Language, is a version of the Tcl [1] scripting language for the Java [2] environment. Jacl is designed to be a universal scripting language for Java: the Jacl interpreter is written completely in Java and can run on any Java Virtual Machine. Jacl can be used to create Web content or to control Java applications. This paper explains the need for Jacl as a scripting language for Java and discusses the implications of Jacl for both the Java and Tcl programming communities. It then describes how to use Jacl. It also explains the implementation of the Jacl interpreter and how to write Tcl extensions in Java. 1. Motivation One on-going question in the Tcl community is, how can Tcl exploit the popularity of Java and the World Wide Web. There are two projects that try to bring Tcl into the world of Java and WWW. The Tcl Plugin [3] allows the execution of Tcl scripts inside Web browsers. However, the Tcl Plugin runs only inside certain browsers (Navigator and Explorer), requires the user to install software on local machines and does not communicate well with Java. Tcl-Java [4] allows the evaluation of Tcl code in Java applications, but it requires native methods and thus cannot run inside most browsers. A Tcl implementation in Java will facilitate the creation of portable Tcl extensions [4]. Tcl is a portable scripting language. However, although Tcl provides some support for writing portable extensions, maintaining Tcl extensions written in C for multiple platforms is still a difficult task, especially if network or graphics programming is involved. Currently Tcl runs on more platforms than Java. However, due to the large number of commercial Java developers, Java will probably catch up in the near future and run on more platforms. If Tcl implementations can be written in Java, the Tcl community can leave the portability issues to JavaSoft and other Java implementers and concentrate on developing the Tcl core interpreter and extensions. On the other hand, Java needs a scripting language as powerful as Tcl. Java is a structured programming language and is not a good scripting or command language [7]. Currently, scripting languages that can be used on Java platforms, such as Javascript and VBScript, are proprietary, non-portable and restrictive. Javascript and VBScript run only on the browsers that support them. Their scripting engines are system-dependent and cannot run on arbitrary Java Virtual Machines. These languages are good for scripting HTML pages, but they lack the features that would allow their deployment at any larger scale. For example, Javascript cannot define new classes; Java applets cannot directly pass events to VBScript [5, pp. 843]. Moreover, these scripting languages are not embeddable and thus cannot be used to control Java applications. Jacl is a comprehensive solution to the problem of Tcl and Java integration. Since the Jacl interpreter and extensions are written completely in Java, they can run inside any JVM, making Tcl an embeddable, universal scripting language for Java. By using the Jacl interpreter, Java programmers can use Tcl to control simple Web pages, complex networked Java applications, and anything in between. Using Jacl to Script Java Applications and Applets Java applications and applets and very similar to each other. The following section concentrates on applets only but the discussion holds true for Java applications as well. button .b1 button .b2 button .b3 .b1 config -text " ..............fastest............. " .b2 config -text " ..............faster.............. " .b3 config -text " ...........not so fast............ " pack .b1 .b2 .b3 proc scroll {btn time} { set str [$btn cget -text] set str [string range $str 1 end][string index $str 0] $btn config -text $str after $time scroll $btn $time } scroll .b1 100 scroll .b2 200 scroll .b3 500 The Java classes that implement Jacl are in the cornell.* hierarchy. The conell.applet.Shell class can be used to execute Jacl scripts inside applets. The following HTML code shows how to embed an Jacl-enabled applet inside an HTML page: <applet width=300 height=100> code=cornell.applet.Shell.class> <param NAME="jacl.script" VALUE="buttons.tcl"> </applet> When the cornell.applet.Shell class starts up, it will create a Jacl interpreter to execute the script file specified by the jacl.script parameter. Using Tcl and Tk Commands Jacl supports all the basic Tcl commands (e.g., string and puts, as well as the control constructs such as if and for.) It also supports a subset of the Tk commands for building graphical interfaces. Example 2.1 shows a script that performs a simple animation by scrolling text across three buttons at different speed. This script should look familiar to experienced Tcl/Tk programmers because its syntax is exactly the same as traditional Tcl/Tk programs. Figure 2.2 shows how the applet appears inside Netscape. Accessing Java Classes with Raw Scripting There are two ways for Jacl scripts to access Java classes: Raw Scripting and Custom Scripting. Raw scripting uses the Java Reflection API [8] to directly create Java classes and invoke their methods and fields. The following example shows how an applet can use raw scripting to manipulate a java.lang.Date object: set date [new java.lang.Date] button .day -text [$date getDay] The new command is used to create an instance of a Java class with the given name (in this case, java.lang.Date.) The new command returns an object command, which can be used to invoke the methods of the object. In the above example, the getDay method of the object is called to query the current day of the week on the system. The object command supports two special options, get and set, to query and modify the fields of the Java object. In the following example, we create an object of the java.util.Vector class, add several elements and the query the elementCount field to determine the number of elements in the vector object: set vector [new java.lang.Vector] $vector addElement ìstring1î $vector addElement ìstring2î set num [$vector get elementCount] When the methods and fields of the Java objects are invoked, Jacl will coerce the parameters when necessary. For example, in the following code segment, the parameters passed to the setSize method of the frame object may be represented as strings in the script. Jacl will convert them into integers before invoking the setSize method: set frame [new java.awt.Frame] $frame setSize 100 200 Jacl uses a set of heuristics to disambiguate the invocation of overloaded methods. For example, if we have a Java class with an overloaded method foo that can take either an integer or a string parameter: class A { void foo(int i); void foo(String s); } and we manipulate this class with the following script: set obj [new A] $obj foo 1 $obj foo abcd The first call to foo will invoke the integer version because the parameter looks like an integer. In contrast, the second call will invoke the string version because it is not possible to convert abcd to an integer. In the cases where the disambiguation heuristics are insufficient, one can use method signatures to choose which version of an overloaded method should be called. A method signature specifies the name and argument types of a method. For example, the following code forces the string version of the foo method to be called even though the argument looks like an integer: $obj {foo String} 1 2.4 Custom Scripting In custom scripting, Jacl scripts access Java objects through a scripting API provided by a Jacl extension (see section 4 for a discussion on writing Jacl extensions.) The button command in section 2.2 is an example of a custom scripting API for accessing java.awt.Button objects. One can access Java objects through raw scripting or custom scripting. Figure 2.3 shows the differences between raw- and custom scripting and compares the scripting code with Java code. As shown in figure 2.3, both raw- and custom scripting provides interactive access to Java classes. Custom scripting has the advantage of supporting a more convenient syntax but it requires the writing of Jacl extensions. Therefore, raw scripting is generally used to gain ìquick and dirtyî access to Java objects. When it is necessary to have better scripting support for Java objects, Jacl extensions can be written to provide a custom scripting API. 3. Implementation of the Jacl Interpreter The Jacl interpreter is based on the Tcl 7.6 interpreter. Most of the parsing routines for Tcl scripts and expressions are translations of the Tcl 7.6 C source code into Java code. Therefore, the Jacl interpreter is compatible with the Tcl 7.6 interpreter. In fact, the Tcl 7.6 test suite is used to ensure that Jacl parses and executes scripts in exactly the same manner as Tcl 7.6. Color c = new Color(255, 255, 0); b.setForeground(c) add(b); set c [new Color 255 255 0] $b setForeground $c $applet add $b pack .b There are two major enhancements in Jacl with respect to Tcl 7.6: object support and exception handling. These enhancements improve efficiency and simplify the implementation of the Jacl interpreter and extensions. 3.1 Object Support In Tcl 7.6, all objects are represented by strings. In Jacl, however, an object can be represented by any Java object. For example, in the following code: set a 1234 incr a After the first line, the variable a will contain a string ì1234î. At the second line, the incr command will coerce the string into an integer and then increment its value by one. After this operation, the variable a will contain a integer with the value 1235. Moreover, lists in Jacl are implemented as copy-on-write Vector objects to improve both access time and storage efficiency. In the following code set list1 [list 1 2 ... n] set c [lindex $list 3] set list2 $list1 ... lappend list2 abc the lindex operation takes constant time, compared to the O(n) time in Tcl 7.6. Also, after the set list2 $list1 command, the two variables list1 and list2 will refer to the same object. The contents of the list will be copied into the list2 variable only when a destructive operation, such as lappend, is applied to that variable. 3.2. Exception Handling Another difference between Tcl 7.6 and Jacl is how they handle error conditions. Tcl 7.6 uses return code such as TCL_OK and TCL_ERROR to indicate the success or failure of script execution. The Tcl 7.6 C source code spends considerable efforts in checking the return code of functions. In contrast, Jacl uses the Java exception mechanism to handle runtime errors. Thus, the Jacl source code is less cumbersome than the Tcl 7.6 C source code. For example, inside the Tcl parser, where errors can happen in many sections of the code, the Jacl implementation uses about 30% fewer lines of code than the Tcl 7.6 implementation written in C. Figure 3.1 compares the coding style between Jacl and Tcl 7.6 4. Writing Jacl Extensions A Jacl extension is generally a collection of new Tcl commands. A Tcl command is a class that implements the Command interface. The command can be added to a Jacl interpreter by passing an instance of its class to the CreateCommand method. Example 4.1 shows how a print command can be defined int i = interp.GetInt(string); // exception is thrown if string // doesnít contain a valid integer int i; if (Tcl_GetInt(interp, string, &i) != TCL_OK) { return TCL_ERROR; } One interesting feature of Example 4.1 is the way arguments are passed to CmdProc, the command procedure. Because the arguments passed to a command may be Java objects of any type, it is no longer sufficient to pass the arguments as (int argc, char ** argv) in Tcl 7.6. Instead, Jacl passes the arguments in a CmdArgs object. The following code shows the interface of the CmdArgs class: public class CmdArgs { public int argc; public String argv(int index); public int intArg(int index); public double doubleArg(int index); .... public Object object(int index); A command can use the converter methods, such as argv, intArg and doubleArg, to convert the arguments into the required types. The command can also use the Java instanceof operator to directly infer type information of the arguments. In example 4.2, the index1 command verifies that it receives an non-empty Vector object as its first argument before returning the first element of this Vector. 5. Status and Future Directions As of this writing, the Tcl parser, expression evaluator and most basic Tcl commands have been implemented in Jacl. It also supports a subset of the Tk commands for creating graphical interfaces. Jacl is already being used to create simple applets to run inside browsers. It can also be used to control Java applications and applets with raw- and custom scripting. A beta release is expected to be available in the third or fourth quarter of this year. Many more features have been planned for Jacl, including built-in debugging, supports for multi-threading, and a byte-code compiler. To find out more about the new developments of Jacl, please visit the Jacl home page at home/ioi/Jacl. Acknowledgment I would like to thank Thomas Breuel and Anil Nair for providing valuable inputs during the early design stage of Jacl. Thomas sent me the basic design of the cornell.Tcl.Command interface, which I put into Jacl without much change. Scott Stanton and Jacob Levy were instrumental in the design of the raw scripting API. import cornell.Tcl.* class PrintCmd implements Command { Object CmdProc(Interp interp, CmdArgs ca) throws EvalException { if (ca.argc != 2) { throw new EvalException("wrong # args: should be \"" + ca.argv(0) + " string\""); } System.out.println(ca.argv(1)); return ""; } } .... // Create a new "print" command. interp.CreateCommand("print", new PrintCmd()); .... Bibliography [1] John Ousterhout, Tcl and the Tk Toolkit, Addison-Wesley, Massachusetts, 1994 [2] Ken Arnold, James Gosling, The Java Programming Language, Addison-Wesley, Massachusetts, 1996 [3] Jacob Levy , A Tcl/Tk Netscape Plugin, Proc. of the 1996 USENIX Tcl Workshop, Monterey, 1996. [4] Scott Stanton and Ken Corey, TclJava: Toward Portable Extensions, Proc. of the USENIX 1996 Tcl/Tk Workshop, Monterey, 1996. [5] Michael Morrison, et al., Java Unleashed, Sams.net Publishing, Indianapolis, 1997 [6] Brian Lewis, An On-the-fly Bytecode Compiler for Tcl. Proc. of the USENIX 1996 Tcl/Tk Workshop, Monterey, 1996. [7] John Ousterhout, Scripting: Higher Level Programming for the 21st Century,, 1997. [8] Sun Microsystems, Inc., JavaTM Core Reflection, API and Specification,, 1997.
http://static.usenix.org/publications/library/proceedings/tcl97/full_papers/lam/lam_html/lam.html
crawl-003
refinedweb
2,435
56.05
Anthony Baxter wrote: > >>> len(dir(__builtins__)) > 125 > > That's a _lot_ of stuff in one module. Even if you exclude the 40-odd > exceptions that are there, that's still a lot of guff in one big flat > namespace. Any plans to clean it up in Python 3.0? In my opinion, a lot of builtins could either be deleted or moved into a module: buffer complex -> to math open (use file) long abs -> to math apply -> use extended syntax compile -> to sys divmod -> to math execfile -> to sys filter, reduce -> to functional? intern -> ? map -> use list comp oct/hex -> to math? range/xrange (unify) round -> to math Gerrit. --. -- 1780 BC, Hammurabi, Code of Law -- PrePEP: Builtin path type Asperger's Syndrome - a personal approach:
https://mail.python.org/pipermail/python-list/2004-January/283916.html
CC-MAIN-2014-15
refinedweb
125
71.55
Prof. Dr. Frank Leymann / Olha Danylevych Institute of Architecture of Application Systems University of Stuttgart Loose Coupling and Message-based Integration WS 2012 Exercise 2 Date: 24.08.12 (13:00 – 14:30, Room 38.03 or 38.02) Task 2.1 – Guaranteed Delivery a) What does the term “guaranteed delivery” mean? b) What are the pros and cons of guaranteed delivery? Provide examples when guaranteed delivery would be necessary and when it would instead be "overkill" (i.e. a waste of resources/performance/etc.). c) How can guaranteed delivery be realized in JMS? Task 2.2 – JMS Header Information JMS messages have headers that can convey varied information. Many types of headers are specified in the JMS specifications, and custom ones may be added by particular vendor implementations. Describe (a) which values should be assigned to which header fields and (b) who may assign these header fields in order to: ● Deliver a message with high priority ● Specify the destination of a reply message ● Specify the timestamp of its creation ● Message is valid only within a certain time interval Task 2.3 – JMS Properties JMS does not provide any means to guarantee the security aspects of the message exchanges in terms of (1) sender authentication and (2) encryption of the message contents. How can JMS properties be used to enrich the messages with the data relevant to security? Assume the usage of login and passwords as well as encryption via public/private key. Define an appropriate protocol that satisfies the following requirements: 1. user authentication 2. establishing of a secure channel between the communicating parties Task 2.4 – JMS Message Types Assume a message structured as: ({companyName, stock price}+) which contains the current stock prices of several companies (identified by name) at a particular point in time. These data can be packed into different JMS message types, e.g. TextMessage, MapMessage. Provide examples of different formatting of the data for each of the various JMS message types. Task 2.5 – JMS Message Selectors A message selector allows the JMS consumer to filter undesired, incoming JMS messages before they are processed. A selector is defined as a boolean expression combining header fields and properties in a way reminiscent of the WHERE statement of an SQL query. Assume the following scenarios: 1. An insurance application uses a message queue to collect complains from its clients. The application needs to select from the queue all the messages that are come from chemists and physicists that work for the University of Stuttgart. 2. A producer wants to receive a message as soon as it is posted to his queue an order of at least 100 pieces of the article with the inventory number “SFW374556-02”. Define for each of the above scenarios a selector and describe which header fields and properties have to be included in the message. Task 2.6 – JMS Topics Define a hierarchy of JMS topics that describes the German stock market. The topic German stock market should include the indexes (DAX, MDAX, TECDAX,…) as well as different industrial sectors as (sub-)topics. The companies enlisted in the stock market can be represented in different sectors, e.g. a company can publish its messages over the Topic DAX, as well as over a topic “Diversified industrials”. In the scope of the JMS Topic hierarchy resulting from above, consider the following cases: ● BMW wants to publish messages about its stocks. To which (sub-)topics should those messages be published? ● A customer wants to receive the messages from every company listed in the DAX index, as well as from Puma (which is listed in MDAX). Which topics has the customer to subscribe to? Task 2.7 Chat Application 1 For this task you may use NetBeans IDE 7.0.1 Java EE Bundle (comes already with GlassFish Server Open Source Edition 3.1.1:) or any other tooling of your choice. Your task is to develop a chat application that run from command-line using JMS pub/sub API1. In a nutshell, the chat application receives the name of the user as launch parameter (i.e. as args[] in the method main). After its launch, the chat application reads the messages given input by the user from command line (System.in). Each message is terminated by a carriage return (i.e. the message is not processed by the chat application unless the user presses the “return” button). The messages are published on a predefined JMS topic. Incoming messages are pulled by the very same JMS topic, which the chat application must subscribe. All incoming and outgoing messages should be also displayed on the console in the form: [name of the originating user]: [message text] The following hints should help you while developing your chat application: 1 This task is based on the Chapter 2 from the book “Java Message Service” by Richard Monson and David A. Chappell Create the Connection Factory and Topic required for the chat application in your GlassFish using the management console. Create a NamingProperties.java (extends Hashtable) with the configurations of Connection Factory name, Topic name etc. Your application will need to obtain a JNDI connection to the JMS messaging server and look up all required JMS administered objects; create the corresponding JMS session objects as well as JMS publisher and subscriber (here you have to set a JMS message listener), start the JMS connection to the predefined topic. These steps could be done e.g. in the constructor of the class Chat. That Chat class should implement the interface the javax.jms.MessageListener and implement the corresponding onMessage method to react on the messages received from the topic. For reading the messages typed by the user on the command line, you may use the method readLine shown here: Create a method writeMessage(String text) to form a JMS message and publish it on the topic whenever user has typed a message on the console. Additional Task: modify your chat application to filter out the messages from specific users.
https://www.yumpu.com/en/document/view/22406972/loose-coupling-and-message-based-integration-ws-2012-iaas
CC-MAIN-2020-10
refinedweb
1,000
54.63
Add Embedded Scripting to Your C++ Application Scripting tends to come on late in the life of your application. About the time that the feature-set is approaching completeness and customers are starting to get happy, you get hit with a new requirement. Take the scenario when the boss marches in and tells you how many more copies the company could sell if only it could allow users to automate report writing themselves. Or perhaps after rewriting or tweaking reports for the fourth time this week, you have the epiphany yourself. Other times, you find that ordinary macro recording and playback tools fail to meet the needs for rigorous quality assurance testing. What you really want is a test-rig that can run your algorithm 100 or 1,000 times with minor variations to really give it a good shakedown. If you're a game developer, you may want to expose part of your engine to allow third-party developers or your own in-house crew to design new AI behaviors for the zombies. You get the idea: Nearly every application can benefit from a scriptable interface. The C Scripting Language (CSL) The C Scripting Language (CSL) is a well-structured and easy-to-learn script programming language available for Windows 95 thru Windows XP, OS/2, Linux, FreeBSD, and other variants of UNIX. CSL follows the C syntax style very closely so programmers accustomed to C, C++, or Java will be immediately comfortable. CSL can be used as a standalone interpreter or is embeddable into your C/C++ application. As an interpreter, you write the program with your favorite editor and run it directly like any shell script or run it through your Web server's CGI-BIN interface. The CSL C interface can be embedded with most C/C++ compilers; the C++ class library is available for use with Visual C++ 5.0 or later, IBM VisualAge, Borland C++ 5.x, and GCC. Standalone or embedded, CSL scripts have access to built-in libraries for strings, math, file I/O, asynchronous (serial) communications, regular expressions, registry and profiles, window management, and database. The database library (called "dax") enables high-performance tasks such as data import/export, schema setup scripts, and SQL via ORACLE, DB2, MySQL, and ODBC. CSL in Standalone Mode and Basic Syntax Investigating standalone mode is perhaps the quickest way to get your feet wet and convince yourself that the CSL language is both familiar and friendly. Take the following short script as your first exploration into CSL. It computes the average of several numbers passed on the command line: #loadLibrary 'ZcSysLib' main() { var sum, count=0; for (var i=2; i<sizeof(mainArgVals); i++) { sum = sum + mainArgVals[i]; count++; } sysLog(sum / count); } As you can see, it looks fairly predictable because the CSL language is very close to C. However, it presents the following major differences: - All variables are of type var, which can hold numbers or strings. - No goto's. - Exception handling by try/catch/throw, fully interoperable with C++. (Throw an exception in C++ and catch it in CSL or vice versa.) - Dynamic arrays are managed with a resize statement rather than realloc(). If you want to dig deeper into the language syntax, the CSL language reference is the best place to go. Running a CSL Script Inside Your App Take a look at the minimal amount of code needed for your application to embed CSL. By embed, I of course mean loading and executing arbitrary CSL scripts at runtime. You'll see more advanced usage later in this article, but you have to crawl before you can fly. Whether the script was explicitly loaded by the user choosing File | Open Script in your app, pulled from a database, or dynamically generated is irrelevant to this part of the exercise. The following example uses a static in-memory script just to make it crystal clear how the mechanism works: 1 #include <stdlib.h> 2 #include <stdio.h> 3 #include <ZCslApi.h> 4 5 static char* module = "Embed"; /* module name */ 6 static ZCslHandle csl; /* csl handle */ 7 static long errs; /* csl api return code */ 8 9 10 int main() 11 { 12 char buf[1024]; 13 long bufsize=sizeof buf; 14 15 errs = ZCslOpen(&csl,0); 16 ZCslGet(csl, "cslVersion", buf, &bufsize); 17 printf("Using csl version %s\n", buf); 18 19 printf("compile a script from memory\n"); 20 errs = ZCslLoadScriptMem( 21 csl, /* handle */ 22 module, /* module name */ 23 "#loadLibrary 'ZcSysLib'\n" /* the script */ 24 "" 25 "foo()\n" 26 "{\n" 27 " sysLog('current time is '+sysTime()); \n" 28 " sysSleep(3000); \n" 29 " sysLog('current dir is '+sysDirectory()); \n" 30 "}\n" 31 ); 32 33 printf("call 'foo' within compiled script\n"); 34 errs = ZCslCall(csl, module, "foo", 0, 0); 35 36 printf("closing csl\n"); 37 errs = ZCslClose(csl); 38 39 return 0; 40 } /* main */ Line #15 opens a handle to the CSL system, which you later close in line #37. This handle is the basis for all your CSL operations. The meat of this example is lines #21-31, which load a particular script into a module called "Embed". This script contains a single function called foo() that executes three "sys" functions and then exits. Although not terribly useful, the example gets the point across. The output appears below: D:\csl\VV>testcsl Using csl version 4.04 compile a script from memory call 'foo' within compiled script current time is 15:31:46 current dir is D:\csl\VV closing csl Calling a C Function from Your App Inside CSL I know what you're thinking: I haven't seen anything yet that I couldn't do with a system() call to a Perl script or something. This time around, the example exposes a C function from within your application, which is called from CSL. The C function is foo_factorial(), which computes N factorial as expected and is called csl_factorial() inside the CSL script (just to prove that it can be an independent namespace if needed): 1 #include <stdlib.h> 2 #include <stdio.h> 3 #include <ZCslApi.h> 4 5 static char* module = "Embed"; /* module name */ 6 static ZCslHandle csl; /* csl handle */ 7 static long errs; /* csl api return code */ 8 9 10 ZExportAPI(void) foo_factorial(ZCslHandle aCsl) 11 { 12 int ii, factorial; 13 double sum; 14 char buf[40], name[4]; 15 long bufsiz = sizeof buf; 16 17 bufsiz = sizeof(buf); 18 if ( ZCslGet(aCsl, "p1", buf, &bufsiz) ) 19 return; 20 if (!atoi(buf)) { 21 ZCslSetError(aCsl, "foo_factorial: Not a number", -1); 22 } 23 24 factorial = 1; 25 for (ii=1; ii < atoi(buf); ii++) 26 factorial *= ii; 27 28 sprintf(buf, "%d", factorial); 29 ZCslSetResult(aCsl, buf, -1); /* (2) */ 30 } 31 32 33 int main() 34 { 35 char buf[1024]; 36 long bufsize=sizeof buf; 37 38 errs = ZCslOpen(&csl,0); 39 ZCslGet(csl, "cslVersion", buf, &bufsize); 40 printf("Using csl version %s\n", buf); 41 42 errs = ZCslAddFunc(csl, module, "csl_factorial(const p1)", foo_factorial); 43 44 printf("compile a script from memory\n"); 45 errs = ZCslLoadScriptMem(csl, /* handle */ 46 module, /* module name */ 47 "#loadLibrary 'ZcSysLib'\n" /* the script */ 48 "" 49 "foo()\n" 50 "{\n" 51 " sysLog('current time is '+sysTime()); \n" 52 " var x;\n" 53 " x = csl_factorial(5); \n" 54 " sysLog('5! is '+ x); \n" 55 "}\n" 56 ); 57 58 printf("call 'foo' within compiled script\n"); 59 errs = ZCslCall(csl, module, "foo", 0, 0); 60 61 errs = ZCslClose(csl); 62 return 0; 63 } /* main */ First, look at the new CSL-callable function in lines #10-30. Every CSL-callable function will have this same signature: ZExportAPI(void) and one argument of type ZcslHandle(). Once inside the function, you can retrieve its actual arguments by name with ZcslGet() on line #18. You do your function execution stuff, which in this case computes the factorial, and then return the value to CSL with ZCslSetResult() in line #29. Back in the main() function, the new thing to notice is your dynamic installation of the CSL-callable function via ZCslAddFunc() on line #42. Again, you referred to the formal parameter "p1" by name back on line #18. You actually call csl_factorial() right in the middle of your in-memory script, as shown on line #53. Again, this need not have been an in-memory script; it could have been typed into an edit control in your application, loaded from a file, and so forth. When you run the program, you'll see the output of 5 factorial (!): Using csl version 4.04 compile a script from memory call 'foo' within compiled script current time is 16:29:08 5! is 24 You now also call this function from C using a related API: char *args[] = {"5"}; ZcslCall(csl, module, "csl_factorial", 1, args); Other Embedded Interpreters to Consider It's possible that this approach is backwards from what you want. If you want to turn your application inside out and make all of its guts callable from other scripting languages, such as Perl, Python, and Tcl/TK, what you want is SWIG (Software Interface Generator). Ch is an embeddable C/C++ interpreter for cross-platform scripting, shell programming, 2D/3D plotting, numerical computing, and embedded scripting. Ch works on Windows, Solaris, HP-UX, Linux (X86 and PPC), FreeBSD, and QNX. There's even a free add-in for scripting from Microsoft Excel. S-Lang library by John E. Davis is another C-like environment with facilities required by interactive applications such as display/screen management, keyboard input, keymaps, and so on. The most exciting feature of the library is the slang interpreter that you can easily embed into a program to make it extensible. EiC is a freely available C language interpreter in both source and binary form. EiC allows you to write C programs, and then "execute" them as if they were a script (like a Perl script or a shell script). EiC can be run in several different modes: (1) interactively, (2) non-interactively, (3) in scripting mode, and (4) embedded as an interpreter in other systems. Everyone Could Use Scripting CSL provides all the required functionality for plugging a scripting language into your app with minimal hassle. You can export functions from your application with relative ease to make them callable within CSL scripts. The CSL language will be instantly usable by any end-user with C or Java experience. It also contains a rich set of librariesincluding database accessthat this article hasn't even described. Once you try embedded scripting in your app, you and your end-users (or zombies) will wonder how you ever got along without<<
https://www.developer.com/net/cplus/article.php/3562621/Add-Embedded-Scripting-to-Your-C-Application.htm
CC-MAIN-2019-09
refinedweb
1,769
56.79
A Mongoose library for various I2C speaking ADCs from Texas Instruments: The most common are the ADS1115 and ADS1015 chips. The driver takes care of exposing the correct fuctionality based on which type is created. Differential measurements can be taken on all devices, but only ADS1x15 has multiple options. First, create a device using mgos_ads1x1x_create() by specifying the type of chip you're using. Take some measurements using mgos_ads1x1x_read(), and clean up the driver by using mgos_ads1x1x_destroy(). mgos_ads1x1x_set_fsr() is used to set the full scale range (FSR) of the ADC. Each chip supports ranges from 6.144 Volts down to 0.256 Volts. You can read the current FSR with mgos_ads1x1x_get_fsr(). mgos_ads1x1x_set_dr() is used to set the data rate of continuous measurements. The support differs between ADS101X (the 12-bit version, which is faster), and ADS111X (the 16-bit version, which is slower). You can read the current DR with mgos_ads1x1x_get_dr(). mgos_ads1x1x_read() starts a singleshot measurement on the given channel (which takes 1ms for ADS101X and 8ms for ADS111X), and returns a 16 bit signed value. The datasheet mentions that with input voltages around GND, a negative value might be returned (ie -2 rather than 0). mgos_ads1x1x_read_diff() starts a singleshot measurement of the differential voltage between two channels, typically Chan0 and Chan1. Several channel pairs are allowed, see the include file for details. Note, that this function is only available on ADS1X15 chips. #include "mgos.h" #include "mgos_config.h" #include "mgos_ads1x1x.h" void timer_cb(void *data) { struct mgos_ads1x1x *d = (struct mgos_ads1x1x *)data; int16_t res[4]; if (!d) return; for(int i=0; i<4; i++) { if (!mgos_ads1x1x_read(s_adc, i, &res[i])) { LOG(LL_ERROR, ("Could not read device")); return; } } LOG(LL_INFO, ("chan={%6d, %6d, %6d, %6d}", res[0], res[1], res[2], res[3])); } enum mgos_app_init_result mgos_app_init(void) { struct mgos_ads1x1x *d = NULL; if (!(d = mgos_ads1x1x_create(mgos_i2c_get_global(), 0x48, ADC_ADS1115))) { LOG(LL_ERROR, ("Could not create ADS1115")); return MGOS_APP_INIT_ERROR; }_ads1x1x *mgos_ads1x1x_create(struct mgos_i2c *i2c, uint8_t i2caddr, enum mgos_ads1x1x_type type); Initialize a ADS1X1X on the I2C bus i2cat address specified in i2caddrparameter (default ADS1X1X is on address 0x48). The device will be polled for validity, upon success a new struct mgos_ads1x1xis allocated and returned. If the device could not be found, NULL is returned. bool mgos_ads1x1x_destroy(struct mgos_ads1x1x **dev); Destroy the data structure associated with a ADS1X1X device. The reference to the pointer of the struct mgos_ads1x1xhas to be provided, and upon successful destruction, its associated memory will be freed and the pointer set to NULL and true will be returned. bool mgos_ads1x1x_set_fsr(struct mgos_ads1x1x *dev, enum mgos_ads1x1x_fsr fsr); bool mgos_ads1x1x_get_fsr(struct mgos_ads1x1x *dev, enum mgos_ads1x1x_fsr *fsr); Get or Set the Full Scale Range (FSR). All chips in the ADS1X1X family support the same settings. By default, 2.048V is used. Note: ADS1x13 does not support this, and always has an FSR of 2.048V. Returns true on success, false otherwise. bool mgos_ads1x1x_set_dr(struct mgos_ads1x1x *dev, enum mgos_ads1x1x_dr dr); bool mgos_ads1x1x_get_dr(struct mgos_ads1x1x *dev, enum mgos_ads1x1x_dr *dr); Get or Set the Data Rate (in Samples per Second). If the supplied drargument cannot be set on the chip, false is returned. Otherwise, the supplied dris set. By default, ADS101X sets 1600SPS, ADS111X sets 128SPS. Returns true on success, false otherwise. bool mgos_ads1x1x_read(struct mgos_ads1x1x *dev, uint8_t chan, int16_t *result); Read a channel from the ADC and return the read value in result. If the channel was invalid, or an error occurred, false is returned and the result cannot be relied upon. Returns true on success, false otherwise. bool mgos_ads1x1x_read_diff(struct mgos_ads1x1x *dev, uint8_t chanP, uint8_t chanN, int16_t *result); edit this docedit this doc Read a 2-channel differential from the ADC and return the read value in result. If the channel pair invalid, or an error occurred, false is returned and the result cannot be relied upon. Upon success, true is returned. Note: This is only available on ADS1X15 chips. Valid chanP/chanN pairs are : 0/1, 0/3, 1/3, 2/3. Returns true on success, false otherwise.
https://mongoose-os.com/docs/mongoose-os/api/drivers/ads1x1x-i2c.md
CC-MAIN-2022-05
refinedweb
663
66.54
. Putting It All Together We’ve covered Composer, and now we’ve done some Guzzle. Let’s put them together and begin to build our own codebase for our project outside of what we’ve pulled in from Composer-able components. Composer affords us the ability to specify a custom namespace to directory mapping in our autoload instructions. Commonly in PHP, you’ll find code of this sort in a “src” directory. Let’s make a “src” directory in the root of our project, and update our composer.json file to something more like this: { "require": { "guzzlehttp/guzzle": "4.1.*" }, "autoload": { "psr-4": { "Components5\\": "src/" } } } This tells Composer’s autoloader that the “src” directory maps to any class in the “Components5” namespace. We can begin to create new classes and directories within our “src” dir now and as long as they have that namespace, Composer should pick them up and make them available to us. That being said, we must perform a “composer update” at the command line before this change will take effect. Once we’ve updated composer within this project, let’s write a new class. I want to write the beginnings of a simple SDK between Cloud and Guzzle. To that end, let’s create a “CloudSDK” dir in our src dir, and a “Client” class within that. We’ll want this class to inherit from Guzzle’s Client class, so we’re going to need to use a special syntax within our “use” statement in order for PHP to allow us to have two classes of the same name. This is essentially formatted as “use X as Y; and this allows you to rename an external class within the scope of this file to anything else you like. Here it is in practice: Once we have this class, we can begin to write our SDK’s assumptions directly into it. We know our base path and base url, so let’s encode that as constants within our class. We also know that we are going to need to bake authentication into this class. \GuzzleHttp\Client’s constructor expects an array of configuration values, but we could probably simplify that drastically for our needs to simply $user and $pass. By doing this, we can then use the constructor to format the proper configuration values for a typical Guzzle Client, and we can pass that configuration to our parent’s constructor method. [$this::BASE_URL, ['version' => $this::BASE_PATH]], 'defaults' => [ 'auth' => [$user, $pass], ], ]; parent::__construct($config); } ?> This gives us a nice and tidy class that we can pass two parameters into and then make get() calls to Acquia’s Cloud API, but in order to really make this as simple as possible, let’s take it one step further and begin to create custom methods that map directly to Cloud API’s service endpoints. get('sites.json')->json(); } public function getSiteTasks($site) { return $this->get(['sites/{site}/tasks.json', ['site' => $site]])->json(); } ?> These are two simple methods that reduce the code in our index.php quite dramatically and serve as the beginnings of a sane SDK for working with CloudAPI. Our index.php can be simplified and reduced to this: ' . var_export($client->getSites(), TRUE) . ' '; // Get the tasks related to our first example site. echo ' ' . var_export($client->getSiteTasks('devcloud:example1'), TRUE) . ' '; ?> I hope you can see how powerful and robust Guzzle can be and why it’s been so widely adopted. Drupal 8 has adopted Guzzle as well, and if you’d like to begin playing with it more directly, you can install Drupal 8 on a new free tier instance and you’ll have Guzzle available to you immediately. If you’re already signed in to your Acquia account, setting up new instances is a snap.
https://www.acquia.com/es-preview/node/2016
CC-MAIN-2021-17
refinedweb
626
60.55
This doesn't work. It makes sense in my head but it doesn't compile. As usual I've probably done something stupid and am unable to see it. Any help is always appreciated. Code:#include <iostream> using namespace std; // Prototypes // Templates template < class T > T min( T value1, T value2 ) { T min = value1; if( value1 > value2 ) { min = value2; return min; } else return min; } int main() { int number, number1, choose; char charac, charac1; double doub, doub1; cout << "Number = 1\nDouble = 2\nCharacter = 3"; cin >> choose; switch (choose) { case 1: cout << "Enter a number: "; cin >> number; cout << "\nEnter a number: "; cin >> number1; cout << "\n\nSmallest is: " << min( number, number1 ); break; case 2: cout << "Enter a double: "; cin >> doub; cout << "\nEnter a double: "; cin >> doub1; cout << "\n\nSmallest is: " << min( doub, doub1 ); break; case 3: cout << "Enter a char: "; cin >> charac; cout << "\nEnter a char: "; cin >> chrac1; cout << "\n\nSmallest is: " << min( charac, charac1 ); break; } system( "pause" ); return 0; }
http://cboard.cprogramming.com/cplusplus-programming/49394-my-first-attempt-template.html
CC-MAIN-2016-40
refinedweb
157
58.35
Despite being advertised as available on all platforms, in Win32 builds: #pragma intrinsic(ceil) generates warning C4163 in VS2012, VS2010 and VS2005, though floor does not. It does not generate a warning in x64 builds... Please wait... It should not be affecting your code generation, however. You do not actually need "#pragma intrinsic(ceil)" to get the right code generation. By that, I mean that this entire program (no #includes) builds and runs as expected: extern "C" double ceil(double); double x = 1.2; int main() { return (int)ceil(x); } I am closing this MSConnect item. Feel free to re-activate it if you need more info. Thanks! Eric Brumer - Microsoft Visual C++
https://connect.microsoft.com/VisualStudio/feedback/details/785651/warning-c4163-ceil-not-available-as-an-intrinsic-function-in-win32-c-compiler
CC-MAIN-2016-07
refinedweb
113
66.44
I am trying to run the sikuli code on pycharm I am getting the following error : Traceback (most recent call last): File "C:\Program Files\JetBrains from pydevconsole import do_exit, InterpreterInte File "C:\Program Files\JetBrains from _pydev_ File "C:\Program Files\JetBrains import threading File "C:\Users\ from _threading import Lock, RLock, Condition, _Lock, _RLock, _threads, _active, _jthread_ ImportError: cannot import name _threads I have the pycharm version 2017.3.4 Sikuli version 1.1.2 Running on windows 10 Question information - Language: - English Edit question - Status: - Answered - For: - Sikuli Edit question - Assignee: - No assignee Edit question - Last query: - 2018-03-20 - Last reply: - 2018-03-20 looks like a clash on sys.path Check your setup and the way you are running the script. https:/ /answers. launchpad. net/sikuli/ +question/ 666794 Carried out above steps but still unable to run the script
https://answers.launchpad.net/sikuli/+question/666794
CC-MAIN-2018-17
refinedweb
144
54.15
513C - Second price auction Solution: This is a good problem :) The official editorial to this round is very well done and you are encouraged to check them out! Apparently this problem can be solved using a pure mathematical approach, but I would like to describe the dynamic programming approach to solve this problem. In the editorial the author describes an \(O(N(R-L))\) solution to this problem, which requires some bounding on the states. But due to the small search space of N, I decided to write an \(O(N^3(R-L))\) solution because it is a bit more intuitive :) Lets define \(b\), the value of the second bid. Before we attempt the problem, we should firstly know that the expectation of b can be computed as \(\sum bP(b) \). Hence the problem boils down to computing P(b). Suppose we fix b to a value. Let's define the D(i,j,k) be the probability such that for companies [1..i], there j companies that placed bids exactly equals to b, and k companies that placed bids strictly more than b. This is the DP state that works for this problem :) Let prob[i] be the probability of company i to place each bid. To compute D(i,j,k), we have three cases: 1. The i-th company has placed a bid less than b. Then D(i,j,k) += D(i-1,j,k) * prob[i-1] * (b - L[i]). 2. The i-th company has placed a bid exactly equal to b. Then D(i,j,k) += D(i-1,j-1,k) * prob[i-1]. 3. The i-th company has placed a bid more than b. Then D(i,j,k) += D(i-1,j,k-1) * prob[i-1] * (R[i] - b). After computing all D(i,j,k), we can compute P(b) by summing up the following: 1. D(n, j, 1) for all j > 0 (because we need at least 1 company bidding b, and 1 company bidding more than b). 2. D(n, j, 0) for all j > 1 (because in case no company bids more than b, we need at least two companies bid exactly b). And finally we sum up all b * P(b) to get the expectation. Good problem. #include <cstdio> #include <algorithm> using namespace std; double dp[10][10][10]; int a[10][2]; int n; int main(){ scanf("%d",&n); for(int i=1;i<=n;++i){ scanf("%d%d",&a[i][0],&a[i][1]); } double expectation = 0; for(int b=1;b<=10000;++b){ for(int i=0;i<10;++i)for(int j=0;j<10;++j)for(int k=0;k<10;++k)dp[i][j][k]=0; dp[0][0][0] = 1.0; for(int i=1;i<=n;++i){ for(int j=0;j<=n;++j){ for(int k=0;k<=n;++k){ double prob = 1.0/(a[i][1]-a[i][0]+1); double temp = 0; //lower temp = dp[i-1][j][k] * (min(b-a[i][0], a[i][1]-a[i][0]+1)) * prob; if(temp < 0) temp = 0; dp[i][j][k] += temp; temp = 0; //equal if(a[i][0] <= b && b <= a[i][1]) if(j>0) dp[i][j][k] += dp[i-1][j-1][k] * prob; //higher if(k>0)temp = dp[i-1][j][k-1] * (min(a[i][1]-b, a[i][1]-a[i][0]+1)) * prob; if(temp < 0) temp = 0; dp[i][j][k] += temp; } } } for(int j=1;j<=n;++j){ expectation += 1.0 * b * dp[n][j][1]; if(j>1) expectation += 1.0 * b * dp[n][j][0]; } } printf("%.12lf\n",expectation); return 0; }
https://abitofcs.blogspot.com/2015/02/codeforces-rockethon-2015-problem-c.html
CC-MAIN-2018-13
refinedweb
624
72.66
I'm having a great deal of trouble trying build my API using the Django Rest Framework. I have been stuck on the same issue now for several days. I've tried numerous solutions and code-snippets and asked plenty of people but to no avail. I've tried to follow all the instructions in the docs, but to me they are unclear and incomplete. So I'm very desperate for a clear, concise, complete working example to solve me problem. Now here is my question: I have been successful at building a simple Django Rest API by following the instructions here. These instructions make it very easy to build an API that returns a list of all instances of a certain model, or a single instance based on a user-provided ID. So, since I have a model named MyObject, I have built an api that returns a list of all the myObjects when you hit the URL /api/myObjects. If I hit the URL /api/myObjects/60, it gives me the myObject with ID==60. So far so good! But I don't want to do this. I want something a bit more complex. The myObject model has a method called getCustomObjects(). This method itself returns a list of myObjects. When I hit the URL /api/myObjects/60, I want it to return the list produced by calling getCustomObjects() on the myObject with ID==60. This seemingly simple change is causing me a very major headache and I can't figure out how to do it. The reason is that because I want to return a non-standard list of objects, I cannot use the standard way of doing things with a ModelViewSet as described in the docs. When I make the changes that I think should work, I get errors. My current error is: base_name .model .queryset My Route looks like this in myApp's url.py: from rest_framework import routers router = routers.DefaultRouter() router.register(r'myObjects/(?P<id>\d+)/?$', views.MyObjectsViewSet) url(r'^api/', include(router.urls)), class MyObject(models.Model): name = models.TextField() class MyObjectSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = MyObject fields = ('id', 'name',)) `base_name` argument not specified, and could not automatically determine the name from the viewset, as it does not have a `.model` or `.queryset` attribute. base_name is used so the router can properly name the URLs. The DefaultRouter you are using uses the model or queryset attributes of the view set. But since you are using viewsets.ViewSet which has neither, the router cannot determine the base name used to name the generated URL patterns (eg, 'myobject-detail' or 'myobject-list') router.register(r'myObjects', views.MyObjectsViewSet, base_name='myobject') This will result in creating the following URL pattern: ^myObjects/{pk}/$ with the name: 'myobject-detail'. Notice the first param to router.register must be r'myObjects' not r'myObjects/(?P<id>\d+)/?$' because the router just needs the prefix and will take care of creating the patterns. To summarize this is an excerpt from DRF docs or queryset attribute on the viewset, if it has one. Note that if the viewset does not include a model or queryset attribute then you must set base_name when registering the viewset. See routers docs:
https://codedump.io/share/DqdK3L5Rtb76/1/how-to-build-a-django-rest-api-that-returns-a-custom-list-of-models
CC-MAIN-2016-50
refinedweb
540
66.54
Forum:Unsaiklopedia and Namespaces From Uncyclopedia, the content-free encyclopedia Forums: Index > Ministry of Love > Unsaiklopedia and Namespaces Note: This topic has been unedited for 3003 days. It is considered archived - the discussion is over. Do not add to unless it really needs a response.Hello, I'm trying to administrate Unsaiklopedia, and I'm wondering how to use Special:Namespaces to add the Forum and Forum talk namespaces, so it's an actual namespace. Thanks! --~ Jacques Pirat, Esq. Converse : Benefactions : U.w.p.21:29, 7 May 2007 (UTC) - As I understand it, the forum is a special add-on written by Algorithm. It can be found here. Sir Famine, Gun ♣ Petition » 05/7 21:52 - Thanks for the link. The forum extension would be nice for Unsaiklopedia, but I cannot install it since I have no access to FTP(It's probably already installed I'm guessing). Smiddle said something about needing to monkey around with Special:Namespaces to get it into being a real namespace - right now it's :Baurgs Hulundi for some reason. --~ Jacques Pirat, Esq. Converse : Benefactions : U.w.p.22:30, 7 May 2007 (UTC) - It looks like you already have the extensions. Go to got:special:namespaces, log on as an admin, click "Add a namespace prefix" to create 16: Forum and 17: Forum_talk, setting 'default' and 'canonical' both true. The namespaces are numbered, with 0-15 already used internally by MediaWiki. Talk pages are odd-numbered namespaces by convention. - You'll also need to copy whatever codes are on the Forum:Village Dump front page to the main page of your new forum, in order to tell the DPLforum to do its stuff. --Carlb 22:53, 7 May 2007 (UTC) - Thanks! Hopefully it will work... --~ Jacques Pirat, Esq. Converse : Benefactions : U.w.p.00:55, 8 May 2007 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:Unsaiklopedia_and_Namespaces
CC-MAIN-2015-32
refinedweb
307
59.4
User Details - User Since - Mar 31 2020, 3:33 AM (33 w, 6 d) Thu, Nov 12 Add the missing tokens to the documentation and add question mark parsing functions Wed, Nov 11 Add tests and rebase Fri, Nov 6 Mon, Oct 26 Oct 15 2020 This looks like the same as my old patch (). There are additional details available on the commit's Phab page. To put it in a nutshell, some codes can trigger a very large amount of calls to the aliasing check (see the repro provided by @nemanjai:), which results in a very large increase in compilation time. Jul 1 2020 Yes, sorry about blocking the revision. Note that there is still a clang-format issue in one of the files. Jun 26 2020 Jun 25 2020 I'm wondering how hard it would be to have a linter check for TableGen files that reports whether there are lines longer than 80 characters. This could be useful for such situations: I had to change some locations in OpBase.td that went unnoticed during earlier reviews (e.g. the constBuilderCall for I32EnumAttr). Make sure that OpBase.td is wrapped at 80 characters. Jun 24 2020 Not sure why clang-tidy complains about EnumsGenTest.h.inc here. Jun 23 2020 Yes, it's still in the mlir namespace. Moving it out of mlir would probably help, but I don't think there is enough coverage in it right now to have a meaningful impact. The issue fixed in this patch would not have been detected for example, since there are no custom passes in examples/standalone yet. Jun 18 2020 nit: You may want to update the title before landing it Jun 5 2020 Maybe you could move to a slightly different directory structure and follow what other dialects are doing. Using a top-level CMakeLists.txt like # lib/Dialect/Shape/CMakeLists.txt add_subdirectory(IR) and then add the dialect library generation directives in lib/Dialect/Shape/IR? I think it would make sense to follow the same conventions as, say, the Linalg dialect. Jun 2 2020 It looks like there are clang-format errors (). Can you format your changes using git-clang-format HEAD^ and update the diff? This would allow us to have a clean build. Jun 1 2020 Regarding the print and dump methods, I have no strong opinions. We may as well keep them as-is for now. Simplex.cpp is apparently still using // for top-level comments. Regarding the change to SmallVector instead of std::vector, it would probably make sense to use ArrayRef<T> for function parameters since it makes the API more flexible. If you need to mutate the parameter then I would suggest using SmallVectorImpl<T> instead of specifying the size of the internal buffer in the parameter type. May 31 2020 I added quite a lot of inline comments. To avoid redundancies for some of them, here are more general ones that apply to all the files: - Don't use Doxygen tags in your comments. - Use /// for top-level comments (e.g. outside of functions). - Prefer SmallVector over std::vector since the former can potentially be more efficient for small sizes. - Fraction and Matrix look like they are only ever used with int64_t elements. Is there a need for them to be template classes? May 28 2020 LGTM LGTM too, after adding the test case. Thanks! @ftynse This would suggest an update to this part of the docs then, right? May 22 2020 @nemanjai The changes were reverted by commit 65cd2c7a8015577fea15c861f41d2e4b5768961f. Sure. I didn't realize it would cause such an overhead. If I remember correctly, some MachineInstr instances sometimes have duplicate memory operands for the same load or store. May 21 2020 @efriedma Thanks for your feedback! I had to make some minor adjustments to ARM test cases. If it's still good for you, I will land the patch. Update FileCheck directives in llvm/test/CodeGen/ARM/cortex-a57-misched-vldm-wrback.ll llvm/test/CodeGen/ARM/cortex-a57-misched-vstm-wrback.ll llvm/test/CodeGen/ARM/cortex-a57-misched-vstm.ll to avoid matching the new instruction scheduler debug output. Add some debug output to the instruction sheduler to signal wether or not a chain dependency has been added between two given instructions. This change allows us to properly test the effect of handling multiple memory operands in alias queries on instruction scheduling. May 20 2020 Also propagate memory operands when folding non-MOV instructions. Add a test case. May 19 2020 Simplify the method's control flow by wrapping the old dependency checking code in a helper lambda and calling it for each pair of memory operands. May 18 2020 Update instruction ordering in failing test cases. Add a test case. Split the call frame optimization patch from the ISel DAG postprocessing patch. LGTM after fixing the use of curly braces. May 16 2020 Also propagate store memory operands during call frame optimization. May 14 2020 LGTM, thanks! May 5 2020 Apr 28 2020 nit: The commit message looks a little convoluted. Maybe you could simplify/structure it a little bit. Otherwise, LGTM. Apr 24 2020 It is just a cleanup. I ran into an issue related to this missing check in an out-of-tree project. Apr 21 2020 Apr 7 2020 I think it would make sense to propagate the changes to mlir-opt as well. Apr 6 2020 Apr 4 2020 Thanks for taking the time to comment on this. Apr 3 2020 Remove the using namespace directives in the standalone-opt source file. This follows standard coding guidelines and code styles of other out-of-tree MLIR projects. Minor update: remove unneeded braces at the end of the TableGen description for Standalone_Op.
http://reviews.llvm.org/p/Kayjukh/
CC-MAIN-2020-50
refinedweb
961
66.03
Free Chapter from "Scala in Depth": Using None instead of Null Free Chapter from "Scala in Depth": Using None instead of Null Join the DZone community and get the full member experience.Join For Free Joshua D. Suereth An Option can be considered a container of something or nothing. This is done through the two subclasses of Option: Some and None. In this article from chapter 2 of Scala in Depth, author Joshua Suereth discusses advanced Option techniques. Using None instead of Null Scala does its best to discourage the use of null in general programming. It does this through the scala.Option class found in the standard library. An Option can be considered a container of something or nothing. This is done through the two subclasses of Option: Some and None. Some denotes a container of exactly one item. None denotes an empty container, a role similar to what Nil plays for List. In Java and other languages that allow null, null is often used as a placeholder to denote a nonfatal error as a return value or to denote that a variable is not yet initialized. In Scala, you can denote this through the None subclass of Option. Conversely, you can denote an initialized or nonfatal variable state through the Some subclass of Option. Let’s take a look at the usage of these two classes. Listing 1 Simple usage of Some and None scala> var x : Option[String] = None #1 x: Option[String] = None scala> x.get #2 java.util.NoSuchElementException: None.get in scala> x.getOrElse("default") #3 res0: String = default scala> x = Some("Now Initialized") #4 x: Option[String] = Some(Now Initialized) scala> x.get #5 res0: java.lang.String = Now Initialized scala> x.getOrElse("default") #6 res1: java.lang.String = Now Initialized #1 Create uninitialized String variable #2 Access uninitialized throws exception #3 Access using default #4 Initialize x with a string #5 Access initialized variable works #6 Default is not used An Option containing no value can be constructed via the None object. An Option that contains a value is created via the Some factory method. Option provides many ways of retrieving values from its inside. Of particular use are the get and getOrElse methods. The get method will attempt to access the value stored in an Option and throw an exception if it is empty. This is very similar to accessing nullable values within other languages. The getOrElse method will attempt to access the value stored in an Option, if one exists, otherwise it will return the value supplied to the method. You should always prefer getOrElse over using get. Scala provides a factory method on the Object companion object that will convert from a Java style reference—where null implies an empty variable—into an Option where this is more explicit. Let’s take a quick look. Listing 2 Usage of the Option factory scala> var x : Option[String] = Option(null) x: Option[String] = None scala> x = Option("Initialized") x: Option[String] = Some(Initialized) The Option factory method will take a variable and create a None object if the input was null, or a Some if the input was initialized. This makes it rather easy to take inputs from an untrusted source, such as another JVM language, and wrap them into Options. You might be asking yourself why you would want to do this. Isn’t checking for null just as simple in code? Well, Option provides a few more advanced features that make it far more ideal then simply using if null checks. Advanced Option techniques The greatest feature of Option is that you can treat it as a Collection. This means you can perform the standard map, flat Map, and foreach methods and utilize them inside a for expression. Not only does this help to ensure a concise syntax, but opens up a variety of different methods for handling uninitialized values. Let’s take a look at some common Null-related issues solved using Option, starting with creating an object or returning a default. Creating a new object or returning a default Many times, you need to construct something with some other variable or supply some sort of default. Let’s pretend that we have an application that requires some kind of temporary file storage for its execution. The application is designed so that a user may be able to specify a directory to store temporary files on the command line. If the user does not specify a new file, if the argument provided by the user is not a real directory, or they did not provide a directory, then we want to return a sensible default temporary directory. Let’s create a method that will give our temporary directory. Listing 3 Creating an object or returning a default def getTemporaryDirectory(tmpArg : Option[String]) : java.io.File = { tmpArg.map(name => new java.io.File(name)). filter(_.isDirectory). getOrElse(new java.io.File(System.getProperty("java.io.tmpdir"))) } #1 Create if defined #2 Only directories #3 Specify Default The getTemporaryDirectory method takes the command line parameter as an Option containing a String and returns a File object referencing the temporary directory we should use. The first thing we do is use the map method on Option to create a java.io.File if there was a parameter. Next, we make sure that this newly constructed file object is a directory. To do that, we use the filter method. This will check whether the value in an Option abides by some predicate and, if not, convert to a None. Finally, we check to see if we have a value in the Option; otherwise, we return the default temporary directory. This enables a very powerful set of checks without resorting to nested if statements or blocks. There are times where we would like a block, such as when we want to execute a block of code based on the availability of a particular parameter. Executing block of code if variable is initialized Option can be used to execute a block of code if the Option contains a value. This is done through the for each method, which, as expected, iterates over all the elements in the Option. As an Option can only contain zero or one value, this means the block either executes or is ignored. This syntax works particularly well with for expressions. Let’s take a quick look. Listing 4 Executing code if option is defined val username : Option[String] = ... for(uname <- username) { println("User: " + uname) } As you can see, this looks like a normal "iterate over a collection" control block. The syntax remains quite similar when we need to iterate over several variables. Let’s look at the case where we have some kind of Java Servlet framework and we want to be able to authenticate users. If authentication is possible, we want to inject our security token into the HttpSession so that later filters and servlets can check access privileges for this user. Listing 5 Executing code if several options are defined def authenticateSession(session : HttpSession, username : Option[String], password : Option[Array[Char]]) = { for(u <- username; p <- password; if canAuthenticate(username, password)) { #1 val privileges = privilegesFor(u) #2 injectPrivilegesIntoSession(session, privileges) } } #1 Conditional logic #2 No need for Option Notice that you can embed conditional logic in a for expression. This helps keep less nested logical blocks within your program. Another important consideration is that all the helper methods do not need to use the Option class. Option works as a great front-line defense for potentially uninitialized variables; however, it does not need to pollute the rest of your code. In Scala, Option as an argument implies that something may not be initialized—its convention to make the opposite true, that is: functions should not be passed as null or uninitialized parameters. Scala’s for expression syntax is rather robust, even allowing you to produce values rather than execute code blocks. This is especially handy when you have a set of potentially uninitialized parameters that you want to transform into something else. Using several potential uninitialized variables to construct another variable Sometimes we want to transform a set of potentially uninitialized values so that we only have to deal with one. To do this, we’re going to use a for expression again, but this time using a yield. Let’s look at the case where a user has input some database credentials or we attempted to read them from an encrypted location and we want to create a database connection using these parameters. We don’t want to deal with failure in our function because this is a utility function that will not have access to the user. In this case, we’d like to just transform our database connection configuration parameters into a single option containing our database. Listing 6 Merging options def createConnection(conn_url : Option[String], conn_user : Option[String], conn_pw : Option[String]) : Option[Connection] = for { url <- conn_url user <- conn_user pw <- conn_pw } yield DriverManager.getConnection(url, user, pw) This function does exactly what we need it to. It does seem though that we are merely deferring all logic to DriverManager.getConnection. What if we wanted to abstract this such that we can take any function and create one that is option friendly in the same manner? Take a look at what we’ll call the lift function: Listing 7 Generically converting functions scala> def lift3[A,B,C,D](f : Function3[A,B,C,D]) : Function3[Option[A], Option[B], Option[C], Option[D]] = | (oa : Option[A], ob : Option[B], oc : Option[C]) => | for(a <- oa; b <- ob; c <- oc) yield f(a,b,c) | } lift3: [A,B,C,D](f: (A, B, C) => D)(Option[A], Option[B], Option[C]) => Option[D] scala> lift3(DriverManager.getConnection) #1 res4: (Option[java.lang.String], Option[java.lang.String], Option[java.lang.String]) #1 Using lift3 directly The lift 3 method looks somewhat like our earlier createConnection method, except that it takes a function as its sole parameter. As you can see from the REPL output, we can use this against existing functions to create option-friendly functions. We’ve directly taken the DriverManager.getConnection method and lifted it into something that is semantically equivalent to our earlier createConnection method. This technique works well when used with the encapsulation of uninitialized variables. You can write most of your code, even utility methods, assuming that everything is initialized, and then lift these functions into Option friendly as appropriate. Summary Scala provides a class called Option that allows developers to relax the amount of protection they need when dealing with null. Option can help to improve reasonability of the code by clearly delineated where uninitialized values are accepted. Here are some other Manning titles you might be interested in: Last updated: August 15, 2011 Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/free-chapter-scala-depth-using
CC-MAIN-2019-35
refinedweb
1,826
53.41
This is the second article in the series of python scripts. In this article we will see how to crawl all pages of a website and fetch all the emails. Important: Please note that some sites may not want you to crawl their site. Please honour their robot.txt file. In some cases it may lead to legal action. This article is only for educational purpose. Readers are requested not to misuse it. Instead. import re import requests import requests.exceptions from urllib.parse import urlsplit from collections import deque from bs4 import BeautifulSoup # starting url. replace google with your own url. starting_url = '' # a queue of urls to be crawled unprocessed_urls = deque([starting_url]) # set of already crawled urls for email processed_urls = set() # a set of fetched emails emails = set() # process urls one by one from unprocessed_url queue until queue is empty while len(unprocessed_urls): # move next url from the queue to the set of processed urls url = unprocessed_urls.popleft() processed_urls.add(url) # extract base url to resolve relative links parts = urlsplit(url) base_url = "{0.scheme}://{0.netloc}".format(parts) path = url[:url.rfind('/')+1] if '/' in parts.path else url # get url's content print("Crawling URL %s" % url) try: response = requests.get(url) except (requests.exceptions.MissingSchema, requests.exceptions.ConnectionError): # ignore pages with errors and continue with next url continue # extract all email addresses and add them into the resulting set # You may edit the regular expression as per your requirement new_emails = set(re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", response.text, re.I)) emails.update(new_emails) print(emails) # create a beutiful soup for the html document soup = BeautifulSoup(response.text, 'lxml') # Once this document is parsed and processed, now find and process all the anchors i.e. linked urls in this document for anchor in soup.find_all("a"): # extract link url from the anchor link = anchor.attrs["href"] if "href" in anchor.attrs else '' # resolve relative links (starting with /) if link.startswith('/'): link = base_url + link elif not link.startswith('http'): link = path + link # add the new url to the queue if it was not in unprocessed list nor in processed list yet if not link in unprocessed_urls and not link in processed_urls: unprocessed_urls.append(link) Constructive feedback is always welcomed.
https://pythoncircle.com/post/217/python-script-2-crawling-all-emails-from-a-website/
CC-MAIN-2021-43
refinedweb
378
59.4
Hello, i've been messing around with basic scripting in java, and now i'm trying to get exceptions down. Here is the code: /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package First; import java.util.InputMismatchException; import java.util.Scanner; /** * * @author Office */ public class Constructorexample { public int total; public static int yourage; public static String yourname; static Scanner s = new Scanner(System.in); public static void getnameandage(String name, int age){ int l = name.length(); int ax2 = age*2; System.out.println(); System.out.println("The total length of your name is " + l); System.out.println(); System.out.println("Your age x 2 = " + ax2); } public static void main(String[] args) { System.out.println("Please enter your name:"); try{ yourname = s.nextLine(); }catch (Exception e){ System.out.println("Wrong input!"); } System.out.println(); System.out.println("Now please enter your age:"); yourage = s.nextInt(); getnameandage(yourname, yourage); } } I want to check if the user input a number instead of letters, and vice verse for the second prompt. I've tryed a try/catch block around yourname=s.nextLine(); and this stop the program from stopping, but it wont display text like I ask it to. Could someone help me out here? I also would like the program to ask the question again once the error is displayed, how would I do that? Thanks for any help!
http://www.javaprogrammingforums.com/exceptions/30104-exception-catching-but-im-getting-no-output-console.html
CC-MAIN-2015-18
refinedweb
233
60.61
Ubic::Cmd::Results - console results set version 1.49 use Ubic::Cmd::Results; $results = Ubic::Cmd::Results->new; $results->print($result); $results->print($result, 'bad'); $results->print($result, 'good'); $code = $results->finish; # prints final statistics and returns supposed exit code This class controls the output of service actions. This is considered to be a non-public class. Its interface is subject to change without notice. Constructor. Print given strings in red color if stdout is terminal, and in plain text otherwise. Print given strings in green color if stdout is terminal, and in plain text otherwise. Print given Ubic::Result::Class object. $type can be "good" or "bad". If $type is specified, it is taken into consideration, otherwise result is considered good unless it is "broken". Add result without printing. Get all results. Get exit code appropriate for results. It can be detected dynamically based on results content, or set explicitly from Ubic::Cmd, depending on command. Set exit code explicitly. Print error if some of results are bad, and return exit.
http://search.cpan.org/~mmcleric/Ubic-1.49/lib/Ubic/Cmd/Results.pm
CC-MAIN-2018-09
refinedweb
172
52.66
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).+") But all the saved files are the same and I wanted each file to export as different frame, already polygonized. Is this possible? hi, of course my example doesn't take care of material applied to the object, you need a bit of extra work for that. But shouldn't be too hard. thanks for the scene Cheers, Manuel hello, This is not really clear to us : If your code is working for obj, it should work with c4d. We are not sure if the problem is in your code or in your scene. I've created that script, it's working with a simple cube animated with keyframes. Just a remark about your code, you should use is not None is not None if poly_doc is not None: ..... dosomething import c4d from c4d import gui import os # Welcome to the world of Python # Main function def main(): # Retrieves the file name and path file_path = doc.GetDocumentPath() file_oriname = doc.GetDocumentName()[:-3] #remove .c4d. doc.ExecutePasses(None, True, True, True, c4d.BUILDFLAGS_NONE) doc.ExecutePasses(None, True, True, True, c4d.BUILDFLAGS_NONE) currentFrame = doc.GetTime().GetFrame(doc.GetFps()) # Export the C4D file file_name = os.path.join(file_path,file_oriname + str(currentFrame) + ") # Execute main() if __name__=='__main__': main() I'm trying to export a Dynamics MoGraph animation, with a Voronoi Fracture object. If I export as OBJ, it works fine. If I export as C4D, all the saved files are the same, and have no animation. it seem to have a bug with Polygonize, we are investigating at the moment. Some exporter are using this function other aren't. .obj is not that's probably why it's working with obj. The Polygonize function is just a "current state to object" for each object in the scene. You can build your own Polygonize function. I'll be back when i have more information Something like this maybe. I needed to create my own "insertLast" function because the GeListHead was not alive in the new document. But i don't build my scene with dynamics, could you send us your scene at [email protected] ? sorry for the small amount of comment. import c4d from c4d import gui import os # Welcome to the world of Python def GetLast(doc): # retrieves the last element of the document child = doc.GetFirstObject() if child is None: return None while child: if child.GetNext() is None: return child child = child.GetNext() def InsertLast(doc, op): # Insert object after the last object in the scene. last = GetLast(doc) if last is None: # that mean this is the first element added in the scene doc.InsertObject(op) else: op.InsertAfter(last) def CSTO(op, keepAnimation = False ): # Current state to object. # the Keep animation will be set in the basecontainer for the command doc = op.GetDocument() # send the modeling command bc = c4d.BaseContainer() bc.SetBool(c4d.MDATA_CURRENTSTATETOOBJECT_KEEPANIMATION, keepAnimation) res = c4d.utils.SendModelingCommand(command = c4d.MCOMMAND_CURRENTSTATETOOBJECT, list = [op], mode = c4d.MODELINGCOMMANDMODE_ALL, bc = bc, doc = doc) # Cheks if the command didn't failed if res is False: raise TypeError("return value of CSTO is not valie") # Returns the first object of the list. return res[0] def GetNextObject(doc): # get he next object in the scene, only the first level of hierarchy is used. op = doc.GetFirstObject() while op: yield op op=op.GetNext() def MyPolygonize(doc, keepAnimation = False): # For each first level element, call a CSTO and store the result in a new document dst = c4d.documents.BaseDocument() if dst is None: raise ValueError("can't create a new document") for op in GetNextObject(doc): res = CSTO(op, keepAnimation) InsertLast(dst, res) return dst def main(): file_path = doc.GetDocumentPath() file_oriname = doc.GetDocumentName()[:-3] #remove .c4d dst = None. # Using the flag BUILDFLAGS_INTERNALRENDERER will allow to have the voronoi fracture cache to be calculated doc.ExecutePasses(None, True, True, True, c4d.BUILDFLAGS_INTERNALRENDERER) currentFrame = doc.GetTime().GetFrame(doc.GetFps()) # Export the C4D file file_name = os.path.join(file_path,file_oriname + str(currentFrame) + ".c4d") dst = MyPolygonize(doc, False) if dst is not None: c4d.documents.SaveDocument(dst,file_name,c4d.SAVEDOCUMENTFLAGS_DONTADDTORECENTLIST | c4d.SAVEDOCUMENTFLAGS_SAVECACHES,c4d.FORMAT_C4DEXPORT) # Execute main() if __name__=='__main__': main() Thank you very much, Manuel. I will send the scene to the e-mail you provided. You're welcome, Manuel. Actually, I usually prepare all the textures and mapping to be in UVW mapping, when texturing is required. But, mainly, what I need is exporting geometry that is animated.
https://plugincafe.maxon.net/topic/12308/exporting-polygonized-scene/?
CC-MAIN-2022-27
refinedweb
772
59.4
#include <stdint.h> Go to the source code of this file. Alarm functions Simple alarm-clock functionality supplied by eal. Does not require hpet support. Definition in file rte_alarm.h. Signature of callback back function called when an alarm goes off. Definition at line 26 of file rte_alarm.h. Function to set a callback to be triggered when us microseconds have expired. Accuracy of timing to the microsecond is not guaranteed. The alarm function will not be called before the requested time, but may be called a short period of time afterwards. The alarm handler will be called only once. There is no need to call "rte_eal_alarm_cancel" from within the callback function. Function to cancel an alarm callback which has been registered before. If used outside alarm callback it wait for all callbacks to finish execution.
https://doc.dpdk.org/api-20.11/rte__alarm_8h.html
CC-MAIN-2021-39
refinedweb
136
69.58
9 Downloads Updated 20 May 2019View Version History This submission provides the following files to assist the programmer in creating robust C/C++ mex code that can compile and run under multiple MATLAB versions: (1) matlab_version.h provides pre-processor code that defines the macro MATLAB_VERSION, which will contain the hex number equivalent of the MATLAB version that is being used for the compile (e.g., a value of 0x2015b would indicate MATLAB version R2015b). It also defines the macro TARGET_API_VERSION to differentiate between the R2017b and R2018a API libraries being used. And it contains a prototype for the matlab_version() function that is contained in the matlab_version.c file. (2) matlab_version.c provides code for the matlab_version() function, which returns the hex number equivalent of the MATLAB version that is currently being run. (via a mexCallMATLAB callback) (3) matlab_version_test.c provides code for a mex routine that tests matlab_version.h and matlab_version.c (4) matlab_version_test.m provides m-code that will automatically compile the matlab_version_test.c file. To test all of this, simply type at the command line: >> matlab_version_test There are also extensive comments at the front end of the matlab_version.h file that describe an assortment of mxArray and API changes over the years. The idea is that the macro MATLAB_VERSION can be used to determine whether various library functions or features that the programmer is depending on are actually available in the version being used for the compile. E.g., the programmer could use an #if - #else - #endif block to conditionally compile different code depending on the value of MATLAB_VERSION. And the matlab_version( ) function result can be used to determine if a MATLAB function the programmer wants to call with mexCallMATLAB is actually available in the current version of MATLAB being run. Tested under various Win32 and Win64 versions of MATLAB from R2009a - R2018a. James Tursa (2020). C Mex MATLAB Version (), MATLAB Central File Exchange. Retrieved . Find the treasures in MATLAB Central and discover how the community can help you!Start Hunting! Create scripts with code, output, and formatted text in a single executable document.
https://www.mathworks.com/matlabcentral/fileexchange/67016-c-mex-matlab-version
CC-MAIN-2020-45
refinedweb
347
55.44
Installing Puppet for Junos OS Setting Up the Puppet Master Juniper Networks provides support for using Puppet to manage certain devices running Junos OS. The Puppet master must be running Puppet open-source edition. Table 1 outlines the version of Puppet that must be installed on the Puppet master in order to manage the different Junos OS variants and releases of Puppet for Junos OS on the client. The Puppet master must also have the following software installed in order to use Puppet to manage devices running Junos OS: Juniper Networks NETCONF Ruby gem—Ruby gem that enables device management using the NETCONF protocol. netdevops/netdev_stdlib Puppet module—includes the Puppet type definitions for the netdev resources. juniper/netdev_stdlib_junos Puppet module—includes the Junos OS-specific code that implements each of the types. When you install this module on the Puppet master, it automatically installs the netdev_stdlib module. To configure the Puppet master for use with devices running Junos OS: - Install Puppet open-source edition. See the Puppet website for Puppet installation instructions. - Install the Juniper Networks NETCONF Ruby gem using the command appropriate for your Puppet master installation. - Install or upgrade the Juniper Networks netdev_stdlib_junos Puppet module. To install the netdev_stdlib_junos module, execute the following command on the Puppet master, and specify the module version required to manage your particular devices. To upgrade the module when you have an older version installed, use the upgradeoption. - Set up the puppet.conf file on the Puppet master. For information about the configuration file, see Setting Up the Puppet Configuration File on the Puppet Master and Puppet Agents Running Junos OS. The Puppet agent identifies with the Puppet master using SSL. By default, the puppet master service does not sign client certificate requests. As a result, the Puppet master must approve the agent certificate the first time an agent tries to connect to the master. After the Puppet agent node is configured and running, approve the client certificate on the Puppet master by using the command appropriate for your installation, for example, by using the puppet cert sign host command or the puppetserver ca sign --certname host command. Configuring the Puppet Agent Node Juniper Networks provides support for using Puppet to manage certain devices running Junos OS. The setup on the agent node depends on the device and the Junos OS variant running on the device. Certain devices require installing the Puppet agent package on the device, other devices have the Puppet agent integrated into the software image, and some devices support running the Puppet agent as a Docker container. To verify support for a specific platform and determine which setup to use for a given device and Junos OS release, see Puppet for Junos OS Supported Platforms. Table 2 outlines the tasks required to configure the Puppet agent node for the different types of setups. To configure the node, perform the steps in each linked task. OCX1100 switches, QFX Series switches running Junos OS with Enhanced Automation, and devices running Junos OS Evolved have the Puppet agent integrated with the software. If the device also supports using the Puppet agent Docker container, you can elect to run the Puppet agent as a Docker container instead of using the integrated Puppet agent. - Installing the Puppet Agent Package - Configuring the Junos OS User Account - Configuring the Environment Settings - Starting the Puppet Agent Process - Using the Puppet Agent Docker Container Installing the Puppet Agent Package To install the Puppet agent on devices running Junos OS that do not have the agent integrated into the software: - Determine the jpuppet software package required for your platform and release at Puppet for Junos OS Supported Platforms. - Access the download page at. - Select the release folder corresponding to the Puppet for Junos OS release to download. - Download to the /var/tmp/ directory on the agent device the jpuppet software package that is specific to your platform or device microprocessor architecture, depending on the Puppet for Junos OS release.Note: Starting in Puppet for Junos OS Release 2.0, the jpuppet packages are specific to the microprocessor architecture. In earlier releases, the packages are specific to a particular platform. If you do not know the microprocessor architecture of your device, you can use the UNIX shell command uname -a to determine it.Note: We recommend that you install the jpuppet software package from the /var/tmp/ directory on your device to ensure the maximum amount of disk space and RAM for the installation. - Configure the provider name, license type, and deployment scope associated with the application. - Install the software package using the request system software addoperational mode command, and include the no-validateoption. - Verify that the installation is successful by issuing the show versioncommand. The list of installed software should include the jpuppet package. For example:Note: The package name might vary depending on the Puppet for Junos OS release. Configuring the Junos OS User Account You must configure a user account to run the Puppet agent. The user must have configure, control, and view permissions. You can configure any username and authentication method for the account. To configure a Junos OS user account to run the Puppet agent: - Configure the account username, login class, authentication method, and shell. - Commit the configuration. Configuring the Environment Settings Set up the directory structure and environment settings on any agent nodes on which you installed the Puppet agent package or that use the Puppet agent that is integrated with the software image. To configure the necessary directory structure and environment settings to run the Puppet agent: - Log in to the agent node using the Puppet account username and password. - If you are not already in the UNIX-level shell, enter the shell. - Create a $HOME/.cshrc file, and include the content corresponding to the variant of Junos OS and the release of Puppet for Junos OS installed on the device, which is outlined in Table 3. - Exit the device and log back in using the Puppet account username and password. - If you are not already in the UNIX-level shell, enter the shell. - Verify that the jpuppet code is installed and that the PATH variable is correct by running Facter, which should display device-specific information. For example: - Create the following $HOME/.puppet directory structure: - Place your puppet.conf file in the $HOME/.puppet directory. For information about the configuration file, see Setting Up the Puppet Configuration File on the Puppet Master and Puppet Agents Running Junos OS. Starting the Puppet Agent Process Devices that have the Puppet agent integrated into the software require that you start the Puppet agent process on the device. Start the Puppet agent process after configuring the Junos OS user account and environment settings. To start the Puppet agent process: - Enter the shell. - Start the Puppet agent process by executing the puppet agentcommand, and include any desired options. Note: For example, on devices running Junos OS or Junos OS with Enhanced Automation: On devices running Junos OS Evolved, switch to the default VRF for management traffic, vrf0, and then start the agent. You can choose to define the server settings in your Puppet configuration file instead of specifying the settings as command options. Using the Puppet Agent Docker Container Certain devices running Junos OS Evolved support running the Puppet agent as a Docker container. Docker is a software container platform that is used to package and run an application and its dependencies in an isolated container. Juniper Networks provides a Docker image for the Puppet agent on Docker Hub. When you run the Puppet agent using the Docker container, the container: Shares the hostname and network namespace of the host Uses the host network to communicate with the Puppet server Authenticates to the host using key-based SSH authentication To use the Puppet agent Docker container on supported devices: - Log in as the root user. - Switch to the default VRF for management traffic, vrf0. - Start the Docker service, and bind it to the default VRF for management traffic, vrf0. - Set the DOCKER_HOSTenvironment variable. - Start the Puppet agent Docker container as follows, and set the NETCONF_USERto the Junos OS user account that was set up to run the agent. - Generate the SSH key pair that will be used to authenticate the container to the host. - Copy the public key to the host, and add it to the root user’s authorized_keys file. - Verify the connection from the container to the host. - Place your puppet.conf file in the container’s /etc/puppet directory.Note: For information about the configuration file, see Setting Up the Puppet Configuration File on the Puppet Master and Puppet Agents Running Junos OS. - Start the Puppet agent. - On the Puppet master, accept the agent’s keys using the command appropriate for your installation. Setting Up the Puppet Configuration File on the Puppet Master and Puppet Agents Running Junos OS The Puppet configuration file, puppet.conf, defines the settings for the Puppet master and agent nodes. It is an INI-formatted file with code blocks that contain indented setting = value statements. The main code blocks are: [master]—settings for the Puppet master. [agent]—settings for the agent node. [main]—global settings that are used by all commands and services. The settings in the [master]and [agent]blocks override those in [main]. On the Puppet master, the configuration file resides at $confdir/puppet.conf. On agent nodes running Junos OS, the location depends on your setup. Table 4 outlines the location where the Puppet configuration file should reside for a given setup on devices running Junos OS. Creating environment-specific Puppet configuration files is beyond the scope of this document. However, when using Puppet to manage devices running Junos OS, the Puppet master and agent node puppet.conf files must contain the following statement within the [main] configuration block: In addition, client devices running Junos OS Evolved must include the certname statement in the puppet.conf file and specify the node’s certificate name. The Puppet master uses the certificate name, which can be a hostname, an IP address, or any user-defined name in lowercase characters, to identify the client. The following example shows a sample puppet.conf file for an agent node running Junos OS: The following example shows a sample puppet.conf file for an agent node running Junos OS Evolved: For more information about Puppet configuration files, see the Puppet website at. Configuring the Puppet for Junos OS Addressable Memory On devices running Junos OS, the amount of memory available to Puppet is 64 MB by default. You can expand the usable memory to the system maximum values as defined in Table 5. To expand the amount of memory available to the Puppet agent execution environment, including the Puppet agent and Facter processes: - Log in to the Puppet agent using the Puppet user account username and password. - In the Puppet user $HOME/.cshrc file, add the limit data memorycommand to the file. For example:
https://www.juniper.net/documentation/us/en/software/junos-puppet/junos-puppet/topics/topic-map/junos-puppet-installing.html
CC-MAIN-2022-05
refinedweb
1,822
54.12
Flash ActiveX's CallFunction method always fails (E_FAIL)AJet1234 Jun 8, 2007 6:46 AM I likely have collected all information in the web, but still facing the problem. I am trying to host the Flash ActiveX in a C# program and establish two-way communication between the host application and the ActionScript contained in my SWF file. On the ActionScript side, I use the ExternalInterface class. On the ActiveX side, for callbacks, I use the IShockwaveFlash:: FlashCall event, which works perfectly in all host applications I have experimented with. For direct calls, I use IShockwaveFlash:: CallFunction() method, which doesn't work on some host applications (unfortunately those I need). It fails with COM error (HRESULT E_FAIL, "Unspecified error"). Here is what I have done so far: 1) Installed the latest Flash Player 9, registered Flash9c.ocx ActiveX. 2) Granted Flash security permission to the folder where my SWF is located by prescribing it in "C:\Documents and Settings\myname\Application Data\Macromedia\Flash Player\#Security\FlashPlayerTrust\myapp.cfg" Before I did this, the FlashCall event caused a SecurityError reported from the Flash player. So it makes me think that my problem is not a security issue any more. 3) Tested the SWF file hosted in a browser (both IE and Firefox). The two-way communication with JavaScript works perfectly in both ways, so it means there's no mistake in my ActionScript code, and the way I call ExternalInterface methods is correct. 4) From JavaScript, I tried the following two ways of calling the ActionScript function (called "Handshake") in the SWF movie object: // JavaScript code // call directly swfMovieObject.Handshake( "hello world" ); // call via CallFunction swfMovieObject.CallFunction( "<invoke name="Handshake" returntype="xml"><arguments><string>hello world</string></arguments></invoke>" ); the both methods also worked perfectly, which means the <invoke> xml string I am passing is correct. 5) When hosted in VB6 and on MS Access 2003 Form, the CallFunction method works perfectly. 6) Finally, the CallFunction method fails to work when hosted in Word 2003 document, Excel 2003 worksheet, VBA form in Word 2003 or Excel 2003, and also in a C# program written in Visual Studio 2005.. Please help! 1. Re: Flash ActiveX's CallFunction method always fails (E_FAIL)FeyFre Aug 15, 2007 5:30 AM (in response to AJet1234)Hello, AJet1234 I have this problem too and I think I found why this error ocurs, but I still do not know ho to reslove it. Not only C# projects have such bug, C++ and Assembler code also failed with it. My task for now is to write plugin for some program. This plugin provides user interface for some internal beings of program. User interface done using Flash movie(Flex 2) played usnig ActiveX Flash control. To comunicate between plugin and movie I use CallFunction method of IShockwaveFlash interface. On the first steps I wrote simple application which emualtes behavior of program. All works perfectly. But when I ported communcation code into a DLL CallFunction begun to return E_FAIL value. I think the reason is that ActiveX Control checks in which module it was created. If it was created in startup module(someprogam.exe) it works perfectly. But when it was created in DLL module(various plugins etc) it disables some features(I think for security reason). One of those features is method callFunction. I lost 3 month trying reslove this problem. quote:. I tried 5 different machine with different configurations but that don't reslove problem. Best regards 2. Re: Flash ActiveX's CallFunction method always fails (E_FAIL)AJet1234 Aug 19, 2007 9:00 PM (in response to AJet1234)Thanks FeyFre, I guess you are right. I've got another suspicion that this bug may be due to some multi-threading issues but it's hard to tell. I wonder if Adobe can hear us and fix this bug in the following version of Flash Player. Through the previous versions, this bug survived over and over again :( Currently, I've created the following workaround: in Flash movie, I set a timer which sends (e.g. 10 times a second) the hosting program a request for a command(s) to execute. In this way the direct call (CallFunction) is no longer needed and replaced with callback (ExternalInterface.call). I know it's an ugly solution, but it works and I don't observe any tangible performance issues. Hope this can be helpful to others. 3. Re: Flash ActiveX's CallFunction method always fails (E_FAIL)FeyFre Aug 20, 2007 6:52 AM (in response to AJet1234)Hello, AJet1234 Workaround, you offered, is widely used among other programmers, but it looks like sadism. I offer you to try another way, but you must be a litle familiar with writing COM servers(I have no other choice;-( ). I'll try it soon and advice you to try it. I offer to write own AxtiveX Control whitch will create Flash control. The thing is that the server which serves control must be LocalServer i.e. exe program, in order to use all feature of Flash Control instance createt by it(including CallFunction which calls now must be successfully completed). This is very simple to Aggregation(in COM terminology), and hope it will be working. Best regards PS: I understand my workaround also looks like sadism, but "What can I do?" 4. Re: Flash ActiveX's CallFunction method always fails (E_FAIL)TulipWin Apr 7, 2009 8:30 AM (in response to AJet1234) Hi Ajet, Would you please have a code snippet on the workaround? I couldn't get it worked so I guess I may understand it wrong. Thanks! 5. Re: Flash ActiveX's CallFunction method always fails (E_FAIL)mikezeli Jun 24, 2009 12:33 PM (in response to TulipWin) Just figured this one out. I'm using VC++ 6 with ActiveX Flash 10. The code snippets here will call the Flash function testFunction from VC++ using the CallFunction command and the appropriate XML. In VC++, make the following call assuming that m_flashGUI is the CShockwaveFlash object added to your dialog. CString ret = m_flashGUI.CallFunction("<invoke name=\"testFunction\" returntype=\"xml\">" "<arguments><string>something</string></arguments></invoke>"); The key item in the xml string is the "name" parameter. It must match the name in the addCallback function in the Flash movie. In the flash movie, have something like the following. The addCallback call is important. Without it the CallFunction from C++ will throw an exception. // Import the flash items import flash.events.*; import flash.external.ExternalInterface; // Associate the flash function with the external call flash.external.ExternalInterface.addCallback("testFunction", testFunction); function testFunction(str:String):Boolean { // Do something here... return (true); } Good luck! Mike 6. Re: Flash ActiveX's CallFunction method always fails (E_FAIL)TulipWin Jun 24, 2009 3:34 PM (in response to mikezeli) Hi, Thanks for your response. That was what I did in C#, but it didn't work when our application is launched inside Excel, outside Excel, everything works. Thanks, Thao
https://forums.adobe.com/message/84670?tstart=0
CC-MAIN-2015-11
refinedweb
1,159
64
Note : If you’re interested in machine learning, you can get a copy of my E-book, “The Mostly Mathless Guide to TensorFlow Machine Learning” by clicking HERE There’s a bunch of kids running around with a Coca-Cola in their hands. But hold on — look at their clothes! They’re so clean and white. Too clean, almost. Could this be a Tide ad? Machine learning to the rescue! In this article, I’ll be showing you how to use TensorFlow, a machine learning library, to predict whether or not an ad is a Tide ad. Prerequisites This tutorial will be using Linux. You can probably do it on Windows too, but you may have to change some things. Here are the things you will need : 1. Python 3 2. TensorFlow (pip3 install tensorflow) 3. Keras (pip3 install keras) 4. ffmpeg (sudo apt-get install ffmpeg) 5. h5py (pip install h5py) 6. HDF5 (sudo apt-get install libhdf5-serial-dev) 7. Pillow (pip3 install pillow) 8. NumPy (pip3 install numpy) Although VirtualEnv is not required, it is suggested that you use VirtualEnv to prevent any conflicts / version mistakes between Python 2 and Python 3. Also, you can find all the code and bash scripts here : Getting Started First, we need to describe what our neural network will do. In this case, our neural network will will take one image as input, and tell you whether or not that image belongs to a Tide ad or not. Using ffmpeg, we can split a video into its frames to input an entire video into the neural network as well, and if over 50% of the frames in a video are classified as “Tide ads”, then we will consider it to be a Tide ad. Next, we need data for our neural network to train on. The data will be a large set of .png images that we will get from slicing a video into individual pictures. I will not provide the videos as a download here, so you will need to find the 1 minute 45 second video of all the SuperBowl Tide ads, as well as 5 minutes worth of non-Tide ads. Also, the two videos should have the same size dimensions so that the images that come out are all the same size. Once you obtain these two videos, convert them into .avi format and use ffmpeg to split them into its constituent frames. I’ve created a simple Bash script that will do the splitting process automatically for you, as long as you name the Tide ad video “tide.avi” and the non-Tide ad video “non_tide.avi”. You can find the script here : The script above will take the two videos, and split 5 frames per second of the video, each frame being 512 x 288, into two separate folders. You can choose to do this on your own as well, but in this tutorial, as a convention, all Tide ad pictures will be in a directory called “tide_ads”, and all non-Tide ad pictures will be in a directory called “non_tide_ads”. We’ll have to do the same with the test data, and the prediction data, and these are the bash scripts for those: For the predictions, you can input any video format as the argument for the Bash script, but .avi is suggested for consistency. NOTE : Remember to use chmod +x BASH_SCRIPT_NAME.sh on all of the Bash scripts so that you can execute them! Creating A Convolutional Neural Network Although this analogy is not perfect, you can think of a neural network as a group of students in a classroom who are all shouting out an answer. In this classroom, students are trying to determine whether or not a single image is from a Tide ad. Some students have a louder voice, so their “vote” for an answer counts more. A neural network can be thought of as thousands of students, all shouting different answers. The loudest answer gets passed to the next classroom, and those students discuss the answer (with their answers being modified by the previous classroom’s answer, perhaps by peer pressure), until we reach the very last classroom, where the loudest answer is the answer for the neural network. A convolutional neural network (CNN) is similar in that the students are still looking at an image, but they are only looking at a piece of the image. When they finish analyzing this image, they pass it to the next classroom, but the next classroom gets an even tinier piece of the original image. And so on and so forth, with each next classroom’s image getting smaller and smaller. When they’re done, a vote is outputted. This is different in that in a normal neural network, the students vote for the entire image at once, but in a CNN, they each only vote for a piece of the image. Now that you know the basics, let’s jump into the code. Making The Magic Happen First, let’s define the parameters for our CNN. # Could use larger dimensions, but will make training # times much much longer img_width, img_height = (128, 72) train_dir = 'train_data' test_dir = 'test_data' num_train_samples = 4000 num_test_samples = 2000 epochs = 20 batch_size = 8 Each image will be shrunk to 128 x 72 pixels. Although we could go smaller, we would risk losing too much information. Larger could be better, but the larger the image dimensions are, the longer it will take for the CNN to train. Next, we’ll have to specify in our CNN whether the color channels are first or last. Usually, they are first, though (at least, for png files). Note that an image could have just one color channel if it was grayscale, but in this case, we will only be using color images. We have to specify these because Keras will reduce our images into NumPy arrays, and the ordering matters. # If data is formatted to have the channels first, # then stick the RGB channels in front, else put them # at the end. if K.image_data_format() == 'channels_first': input_shape = (3, img_width, img_height) else: input_shape = (img_width, img_height, 3) Now that our CNN knows how many color channels (3 means RGB) our pictures have, as well as the dimensions of the image, we can apply three layers of convolution -> RELu -> max pooling. The convolution step is essentially taking a tiny matrix and multiplying it to sections of the original matrix. When this is done, the result of each multiplication is recorded into a new matrix. The result is that we get a smaller matrix, which is called a feature map, containing unique features of the image that the machine can then process. Next, we will need to apply ReLu, or Rectified Linear Unit. ReLu is incredibly simple. If the number is negative, make it zero, otherwise, leave it alone. This is because for a neural network, a negative value doesn’t really offer much information in the context of an image. Imagine if we were detecting whether or not an image has dark blue lines. A value of zero in the feature map just means that there are no dark blue lines, while a positive value means that there might be a dark blue line. If so, then a negative value has no useful meaning, and can just be set to zero. ReLu also makes computations easier because zeroes are incredibly easy to deal with. Finally, we apply max pooling, which takes subsections of a matrix and extracts the highest value from that subsection. This will shrink the matrix,reducing computation times, as well as giving us the most important parts of the matrix. For our Tide ad predictor, we’ll use three layers of convolution, ReLu, and max pooling. When we’re done, we’ll put it all together with a fully connected layer. # First convolution layer model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=input_shape)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) # Second convolution layer model.add(Conv2D(32, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) # Third convolution layer model.add(Conv2D(64, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) # Convolution is done, so make the fully connected layer model.add(Flatten()) model.add(Dense(64)) model.add(Activation('relu')) Then, we apply dropout. Dropout, in our classroom analogy, is like duct taping certain students so that they cannot vote. By using dropout, every student needs to learn to give the right answer, rather than depending on the “smart” students to give the right answers. This gives us an extra layer of redundancy and prevents the loudest students from always overpowering the quiet students. In the context of neural networks, it stops overfitting, which is when the neural network becomes too accustomed to the training data and fails to generalize for data that it hasn’t seen before. This can be caused when certain neurons have their weights modified too much by the previous layer’s weights, especially when one neuron has an abnormally high weight. Dropout makes it so that the influence of any one neuron (or group of neurons) is significantly reduced. From here, everything is pretty self-explanatory. We create some extra data points by slightly modifying the original images, and then plug those into our model. Finally, we train the data and save the completed CNN model. # perform random transformations so that the # data is more varied train_datagen = ImageDataGenerator( rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1. / 255) # make extra training data by modifying original training images train_generator = train_datagen.flow_from_directory( train_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') # make extra test data by modifying original test images validation_generator = test_datagen.flow_from_directory( test_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') # Train the CNN model.fit_generator( train_generator, steps_per_epoch=num_train_samples // batch_size, epochs=epochs, validation_data=validation_generator, validation_steps=num_test_samples // batch_size) # Saved CNN model for use with predictions model.save('saved_model.h5') If you don’t want to spend the time finding videos and training the CNN, the trained model is available in the GitHub repository here : Predicting Whether a Video Is a Tide Ad The prediction part is fairly trivial. All we need to do is take a video, turn it into image frames, turn those image frames into NumPy arrays, and then feed them into the trained CNN. The video splitting part is done by generate_predictions.sh. All this Python code below does is feed those image frames into the CNN. If more than half of the frames aren’t Tide ads, then we can conclude that the video probably isn’t a Tide ad. (note that in the prediction array, a value of 1 means it is NOT a Tide ad, and a value of 0 means that it IS a tide ad) import numpy as np import subprocess from keras.preprocessing import image from keras.models import load_model def img_to_array(file_name): loaded_img = image.load_img(file_name, target_size=(img_width, img_height)) img_array = image.img_to_array(loaded_img) img_array = np.expand_dims(img_array, axis=0) return img_array if __name__ == "__main__": directory = "predictions/" num_images = int(subprocess.getoutput("ls predictions | wc -l")) img_width, img_height = (128, 72) # load the saved model, and use it for prediction model = load_model("saved_model.h5") images = [] for i in range(1, num_images+1): # Need up to 6 leading zeroes for formatting i_ = "{:0>7}".format(i) next_img = img_to_array(directory + "prediction-" + str(i_) + ".png") images.append(next_img) images = np.vstack(images) prediction = model.predict(images) percent_tide = sum(prediction)[0] * 100 / num_images print("There is a " + str(percent_tide) + "% chance this is not a Tide ad.") # If more than half the images are NOT Tide ads if(sum(prediction) >= num_images / 2): print("It can be concluded that this is NOT a Tide ad!") else: print("It can be concluded that this IS a Tide ad!") Conclusion How well does this work? Well, not too well. This neural network uses a pitifully small data set, and was trained for very few epochs, so it makes sense that the results are not particularly accurate. Inputting in Pepsi’s 2018 SuperBowl ad gives the following result : There is a 45.833333333333336% chance this is not a Tide ad. It can be concluded that this IS a Tide ad! And inputting in Skittle’s 2018 SuperBowl ad gives this result : There is a 53.4675615212528% chance this is not a Tide ad. It can be concluded that this is NOT a Tide ad! So, does this make every SuperBowl ad a Tide ad? Our neural network seems to think it does. Almost.
https://henrydangprg.com/2018/02/
CC-MAIN-2022-27
refinedweb
2,097
64.1
Recursive Translations of Message strings with mappings Bug Description The following is not really a bug, but a missing feature. I need to translate messages that contain mappings with messages. Zope3 currently only translates the toplevel message and leaves the mappings untranslated, for instance: --------------- snip ------------------ from zope.component import provideUtility from zope.i18n import translate from zope.i18n. from zope.i18nmessageid import Message, MessageFactory _ = MessageFactory( messages = { ('en', u'pink'): u'pink', ('de', u'pink'): u'rosa', ('en', u'colour'): u'The colour is $pink', ('de', u'colour'): u'Die Farbe ist $pink'} xyz = SimpleTranslati provideUtility(xyz, name=u'xyz') pink = _(u'pink', u'pink') pink = Message(pink) print translate(pink, target_ print translate(pink, target_ colour = _(u'colour', u'The colour is $pink') colour = Message(colour, mapping={'pink' : pink}) print translate(colour, target_ print translate(colour, target_ ------------ snip ----------------- The output of the above program is: ------ pink rosa The colour is pink Die Farbe ist pink ------ It can be seen that the string "pink" is not translated if used as mapping in another string to be translated. The problem is easily solved by modifying the translate function in i18n.translatio def translate(self, msgid, mapping=None, context=None, """See zope.i18n. ..... # MessageID attributes override arguments if isinstance(msgid, Message): if msgid.domain != self.domain: mapping = msgid.mapping default = msgid.default # FIX BEGIN # Recursively translate mappings, if they are translatable if isinstance(mapping, dict): for key, value in mapping.items(): # FIX END if default is None: default = unicode(msgid) The fix works perfectly for me, however, I'm not sure if it would break other things in Zope3, moreover one should probably write quick test for it. Ok, I added the above test with circular references, I raise a RuntimeError in this case (could not think of a better exception). Moreover I now do a mapping.copy() and use parantheses for wrapping long lines. What I still think of is if this recursive translation should also be included in the __init_ What do you think? On 2 Apr 2008, at 13:06 , Hermann Himmelbauer wrote: > Ok, I added the above test with circular references, I raise a > RuntimeError in this case (could not think of a better exception). I suggest a ValueError instead. > Moreover I now do a mapping.copy() and use parantheses for wrapping > long > lines. Thanks, but the wrapping isn't exactly done how it's supposed to be. This is the correct form: if (mapping is not None and Message in [type(m) for m in mapping.values()]): I'm not very happy with the newest patch for other reasons, however. You changed the signature of the translate() method. You can't do that, it's governed by the ITranslationDomain interface. Public interfaces like that must not change. Stylistically, I have two things to nitpick about: * Comparisons to None should be made with the 'is' operator (see PEP8) * The way the 'to_translate' variable is used, a 'set' object would be more suitable than a list. Hey, any update on this? Since I'd need this rather soon I'd also fix the outstanding issues and check it in. Objections? On 18 Apr 2008, at 16:34 , Christian Zagrodnick wrote: > any update on this? Since I'd need this rather soon I'd also fix the > outstanding issues and check it in. > > Objections? Not from me! Am Freitag, 18. April 2008 16:34 schrieb Christian Zagrodnick: > Hey, > > any update on this? Since I'd need this rather soon I'd also fix the > outstanding issues and check it in. > > Objections? > > ** Changed in: zope3 > Importance: Undecided => Wishlist Well, I'd like to fix this but I'm still unsure about the interface problem (extra method attribute). What's your opinion on this? Best Regards, Hermann -- GPG key ID: 299893C7 (on keyservers) FP: 0124 2584 8809 EF2A DBF9 4902 64B4 D16B 2998 93C7 fixed in r85508 Not backporting because its a feature, right? On 20 Apr 2008, at 19:23 , Christian Zagrodnick wrote: > fixed in r85508 with quite a lot of XXX comments in it... Can you resolve those please? > Not backporting because its a feature, right? Right. Thanks! On 20.04.2008, at 21:33, Philipp von Weitershausen wrote: > On 20 Apr 2008, at 19:23 , Christian Zagrodnick wrote: >> fixed in r85508 > > with quite a lot of XXX comments in it... Can you resolve those > please? Gah. I at least forgot one of them (RuntimeError/ I'll create an issue for the other (Message interface). -- Christian Zagrodnick gocept gmbh & co. kg · forsterstrasse 29 · 06112 halle/saale · fon. +49 345 12298894 · fax. +49 345 12298891 There is an open issue, so back to in progress: The current implementation doesn't work if the messages in the mapping are in a different translation domain. Er, I seem to have fixed this bug without checking the bugtracker first: http:// http:// I'd appreciate a review if anyone has the time. The new diff looks much better. Thanks Hermann! I only have three more * I think we need a few more tests, especially for the edge cases. For instance, what happens when you do: msg1 = _('Message 1 and $msg2', mapping={}) mapping[ 'msg2'] = _('Message 2 and $msg1', mapping={'msg1': msg1. msg1}) I know, it's a bit of an edge case, but it should be dealt with. * Copying the dictionary shouldn't be done with copy.copy. It can easily be done with mapping.copy(). In other words, dictionaries have a copy() method, you should use it. * Style: Wrapping a long if-line is more nicely wrapped using parantheses, rather than backspace. I suppose it's a matter of taste, but PEP8 suggests using parantheses ("The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets and braces.").
https://bugs.launchpad.net/zope3/+bug/210177
CC-MAIN-2018-09
refinedweb
964
73.27
We have a multi-line control that we are attempting to prevent the Enter/Return key from being used to create a new line. Strangely enough, "AcceptsReturn" as False does not prevent this. So we added the following: Private Sub txtAddr_KeyPress(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyPressEventArgs) Handles txtAddr.KeyPress If e.KeyChar = Microsoft.VisualBasic.ChrW(13) Then e.Handled = True End If End Sub This works fine, however one of the QA people discovered hitting Control + Enter still puts in a newline. How would we prevent this? And why does AcceptsReturn being False not work as it appears it should? What is the intended purpose of it? Ctrl + enter is most likely to produce a line feed (ASCII 10). It may depend on the specific system though. If you are checking for carriage return (ASCII 13) and line feed though you probably have most bases covered. The AcceptsReturn property does something else. The Enter key normally operates the OK button on a dialog. With AcceptsReturn = true, the Enter key will enter a new line in the text box instead of activating the OK button Click event. Pressing Ctrl+Enter will generate a line feed, TextBox treats this as a new line as well. Use this KeyDown event to filter all combinations: private void textBox1_KeyDown(object sender, KeyEventArgs e) { if ((e.KeyData & Keys.KeyCode) == Keys.Enter) e.SuppressKeyPress = true; } I'm guessing you'd have to trap this in the KeyDown, rather than the KeyPress event.
http://www.dlxedu.com/askdetail/3/9b7cbd47fbace658a3b83b84e6155459.html
CC-MAIN-2018-43
refinedweb
251
59.09
button.onclick = new Function("func2()") + button.onclick Discussion in 'Javascript' started by foldface@yahoo.co.uk, Sep 25, 2005. Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum. - Similar Threads Is A.func1(param1).func2(param2.)func3.(param3) legal ???, May 18, 2005, in forum: C++ - Replies: - 3 - Views: - 462 - Howard - May 18, 2005 Does "#if 1 func1 #else func2 #endif execute func1 during executation?Mr. Ken, Sep 6, 2006, in forum: C++ - Replies: - 3 - Views: - 557 This function has an onClick event that calls a function that calls This functionBob, Oct 24, 2006, in forum: Javascript - Replies: - 5 - Views: - 312 - Bob - Oct 24, 2006 Button with onClick inside Form (with Submit Button)Dr. Leff, Sep 10, 2007, in forum: Javascript - Replies: - 7 - Views: - 266 - Lee - Sep 11, 2007 Javascript new-new-new-new-newbee, Mar 10, 2008, in forum: Javascript - Replies: - 2 - Views: - 555 - Thomas 'PointedEars' Lahn - Mar 11, 2008
http://www.thecodingforums.com/threads/button-onclick-new-function-func2-button-onclick.920365/
CC-MAIN-2015-18
refinedweb
185
80.11
The default Xcode project template includes a storyboard for our application content. In this post, we’ll go over the steps we need to take to convert this project from storyboard for coded UI. The screenshots included in the post are from Xcode 13, but the steps should be pretty much identical for Xcode 12 and 11. Let’s start by creating a new iOS project by selecting the App template. In the next step, we need to choose the Storyboard option for the Interface type: In the Project Navigator (⌘ + 1), we can see the default project files. We can delete the Main.storyboard file as it has no use for us in this series. The same thing can be said for the SceneDelegate.swift file — this file is useful when we want to support multitasking (or multi–window feature) in our app — we won’t worry about that for now. When deleting the files from the project, you can choose the Move to Trash to have it removed from the project folder completely: Next, we need to tell Xcode that we do not intend to use Storyboards and remove the reference to the Main.storyboard file we just deleted from the application target: - Select the root project file in the Project Navigator - Select the General tab - Select the application target - Clear out the value for the Main Interface option Since we also deleted the SceneDelegate.swift file, we need to remove the reference to it as well. We can do so by editing the Info.plist file. Select the file and delete the Scene Configuration key located under Application Scene Manifest. Finally, we’ll edit the the AppDelegate.swift to create an instance of UIWindow to be able to display our initial UIViewController. Select the delegate file and delete all the methods except for application(_:didFinishLaunchingWithOptions:) -> Bool so that the file looks like this: import UIKit @main class AppDelegate: UIResponder, UIApplicationDelegate { func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { // Override point for customization after application launch. return true } } Now we just need to add our window, assign the root view controller, and we are done. Our AppDelegate now should look as follows: import UIKit @main class AppDelegate: UIResponder, UIApplicationDelegate { var window: UIWindow? func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { // Root view controller let rootVC = ViewController() // Setup window self.window = UIWindow(frame: UIScreen.main.bounds) self.window?.rootViewController = rootVC self.window?.makeKeyAndVisible() return true } } To confirm everything works, let’s select the ViewController.swift file and set a green background in the viewDidLoad() method: class ViewController: UIViewController { override func viewDidLoad() { super.viewDidLoad() self.view.backgroundColor = .green } } Hit ⌘ + R to build and run the project in the selected simulator or a physical device. When launched, you should see a blank green screen. We are now all set for writing our Auto Layout code! Custom Xcode project template While we can always repeat the steps above for every new project we create, there’s also another way. Xcode allows creating custom project templates so that a new project is already configured for us as we’ve seen above. Keith Harrison has created a template for a new project that does not use storyboards. Follow the steps shown in this GitHub repository if you want to save yourself a minute or two when creating new projects! Where to next - Intro - Basics, Part One - Basics, Part Two - Xcode Setup (reading now) - Stack Views - Custom UIAlert - Players Profile - Twitter Timeline - Twitter Profile - Music Album
https://marpies.com/coding-auto-layout/xcode-setup/
CC-MAIN-2022-21
refinedweb
587
54.93
. > 2) something that doesn't use namespaced tags to identify dynamic > scopes (clashes with #1) Would be nice. But since I don't like 1, no problem here. ;-) > 3) something that doesn't use the name taglib The name does have evil connotations. Do you have a suggestion for a new name? > > That's pretty much all you have to do to make me happy. > > -- > Stefano. > > > > Glen Ezkovich HardBop Consulting glen at hard-bop.com A Proverb for Paranoids: "If they can get you asking the wrong questions, they don't have to worry about answers." - Thomas Pynchon Gravity's Rainbow
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200412.mbox/%3C7AE08032-456E-11D9-AA5C-000393B3DE96@hard-bop.com%3E
CC-MAIN-2015-40
refinedweb
101
76.62
Today we released Release Candidate 2 of Team Foundation Server 2015. I believe this release is VERY close to the final bits we will ship. I’d encourage people to try out upgrades on production backups, pre-production environments and, production environments. We will support you regardless of the path you choose. Download: TFS 2015 RC2 Be aware that this TFS RC2 cannot be installed on the same machine as VS 2015 RC. It can be accessed by the VS RC just fine, but not installed on the same machine – this has to do with some versioning issues on some files that are shared between the two releases. Team Foundation Server 2015 is the biggest release we’ve shipped in a long while. On one hand, it’s a very nice update over TFS 2013 Update 4. On the other hand Team Project rename and some pre-work we’ve done to enable Team Project isolation (move, archive/restore, etc.) in the future has involved significant evolution in the schema and data access layers. Significant schema changes bring some challenges: - Data migration – We have a huge number of customers with a wide variety of TFS instances – some relatively new, some migrated version to version from TFS 2005, and some, I’m sad to say, hand tweaked by an overzealous DBA. Our schema transformation scripts have to be tolerant of many, many things and behave reasonably in every case. First and foremost, we can’t find ourselves in a position where we lose any data in the transformation and secondly we have to work hard to make sure no customer gets part way through and upgrade and finds themselves blocked. - Query plan tuning – TFS operates at insanely large scale. The largest internal TFS database is nearly 30TB. I don’t even know what our largest external database is but I know it’s in the same ballpark. When databases become very large, they are very sensitive to suboptimal query plans. A schema change means retuning all the query plans to the changes. For the past several weeks, we have been dedicated to driving “real-world” upgrades of production TFS systems to TFS 2015 – both internally and externally. We’ve been working through upgrades on our 70 or so large, internal instances. We’ve also been working with MVPs and key customers to do many dozens of additional mission critical upgrades. Along the way, we’ve found a lot of interesting/unexpected data shapes. We’ve learned a lot and we’ve fixed a bunch of bugs. “Release Candidate 2” now and will RTM as soon as we are ready after that. I’ve been watching the bugs that have been coming in as a result of our upgrade testing – the severity, the causes and the rate. I’m just not quite happy with where we are. Once an upgrade is complete and working, the product is very high quality, but we’ve continued to find, every week, a few more bugs in the upgrade process. The latest set were discovered as a result of some spurious network errors in the middle of an upgrade. This led us to realize that we had some remaining bugs in our upgrade retry/recovery process. As a result, we are executing a full regimen of “torture testing”/fault injection testing to force random infrastructure failures during upgrades and ensure that recovery is always complete. In the end, getting quality right is more important than hitting any date. As I write this, I actually don’t know of any bugs that would keep us from declaring this TFS 2015 RC2 build as RTM. We’ve fixed everything we’ve found. It’s just that I don’t feel certain that we’ve found everything yet. Before we call this RTM and tell the world that everyone should go upgrade their TFS servers, I want to get some more upgrades done, complete the fault injection testing and see the newly discovered bugs trail off to zero. This RC2 has many bug fixes, dozens of which are fixes for bugs we’ve uncovered in upgrade testing. At this point, we’ve upgraded well over 100 TB of TFS databases and this RC2 has fixes for every issue we hit along the way. Over the next few weeks, we are going to drive many more upgrades and, assuming the bug tail ends as we expect, we’ll prepare a final build and release TFS 2015 RTM. I apologize for any confusion given that we announced last week that TFS would release on July 20th with Visual Studio and .NET. Given what I outlined above we wanted to take a little extra time to ensure a seamless upgrade to TFS 2015 at RTM. However we will still ship Visual Studio 2015 and .NET 4.6 on July 20th and ship TFS 2015 as soon as we are ready after that. In addition to confusing the messaging, delaying the TFS release will make it a little harder to track our releases. We generally try to synchronize our VS, TFS and .NET major releases all on the same day so the story is super simple for customers. Because TFS 2015 RTM is delayed the story will be a little more complex. The only thing we are delaying is the TFS 2015 server RTM and the associated Project Server connector. All other TFS related deliverables – Team Explorer, Microsoft Test Manager, Team Explorer Everywhere, Test Agents, etc. will all RTM on or before July 20th. All of these will work just fine with your TFS 2013 server if you decide to start using them before getting the TFS 2015 RTM. Again, I’m sorry for the confusion and annoyance this creates. I encourage you to try out the RC2. While I’m not quite ready to call it RTM, it is very close and has stood the test of MANY upgrades at this point. Thank you, Brian Join the conversationAdd Comment Is it possible to migrate from the TFS 2015 RC version to the RTM version? @Francisco, Absolutely! Brian We will most likely upgrade from 2013 Update 4 to 2015 when it comes out as quickly as we can. Given the number of distributed teams we need to support and the required up-time that involves please take the time to remove as many bugs as possible. Whilst we can't wait to get our hands on the new features we'd rather not be chased across continents by angry managers and developers with pitch-forks because of upgrade related issues 🙂 Hi Brian A good decision. People soon forget late software… people rarely forget buggy software. That said, at SSW we have upgraded our internal TFS server to 2015 RC and its humming along nicely. -Adam Are there (or were there in RC?) any known issues when upgrading directly from TFS 2010? Completely in favor of taking time to get things right (as much as possible). Are system requirements, VS compatibility, etc. to the point where they can be announced, or will that arrive with the RTM? How about TFS2013 Update 5 RTM? How will affect the upgrade path to TFS2015? @Rinaldi, Yes, we have fixed some bugs that might affect upgrades from TFS 2010. It's not that every upgrade would be affected but there's some chance that any given upgrade would be affected. We've now fixed all bugs that we've found. Brian @Greg. P, we are rolling out system reqs, deployment guides, etc very soon. I'll have someone follow up with more info. Brian @DS19, Ahh, thanks for asking. No, this does not affect TFS 2013 Update 5. That release is on track for July 20th. Brian From what I can read it sounds like a good decision! And thx for the transparent communication! Archiving old closed out bugs, backlog items and user stories to not be queried would greatly speed up TFS queries. Closed out items are rarely ever accessed. An auto 'move to archive storage when US and all of its backlog items are closed out for 12 months' would help. @Greg_P_, we're publishing requirements & compatibility info later today. We'll link to that page from msdn.microsoft.com/…/overview. Will the download page offer other languages soon? Will there be other languages for RC2? My TFS 2015 RC and Release Management 2015 RC are both french instances. Can the same server host TFS 2015 RC2 and Release Management Server and Client 2015 RC? @Dominic, we'll be releasing non-English builds in about a week. I'm verifying the answer to your second question. Brian The new page on system requirements for TFS 2015 has been published here: msdn.microsoft.com/…/requirements. what about the state of Powertools for TFS 2015 ? currently process templates can only be changed by using witadmin from VS 2015 and manually modifying the XML file with a XML editor – no graphical editor for states transition or work item definition. @Allen "we're publishing requirements & compatibility info later today" Does TFS 2015 still not support SQL Server 2014 SP 1 Getting it right trumps any release date for this product. Hi Brian, Great job. ETA on 2015 Power Tools? @Nicolas, Our plan has been to release the Power Tools within a few weeks of TFS 2015 RTM. With the TFS RTM date moving out a little bit it's likely those two things will line up more closely. Brian Wanted to let you know I did a test upgrade from TFS 2013.4 to TFS 2015 RC2 and the upgrade went smoothly. Our collection database is about 220 GB and took about 8 hours. I had a problem upgrading to RC and worked with your team who resolved the issue and nice to see the fix made it into RC2 Thanks and looking forward to RTM Does TFS 2015 bring any change to client compatibility? Will Team Explorer 2008 be able to connect (with the forward compat hotfix)? Shay Hi Brian I also vote for quality over release date, so keep up that effort! We are currently testing both Visual Studio 2015 RC and XAML build using TFS/VS 2015 RC(1). This is easy because I can do this without affecting our production system. I am however wondering if you could give some input on how we can best test the TFS release candidates on our application/data tier (single server). 1. Install RS’s on our production environment when they are go-live supported – I don’t feel ready for that commitment. 2. Install on a test server. We have a test server similar to the production server, but I would like to leave it as close to the production server so that I can really test the RTM before the production upgrade. 3. Install on a test server where a snapshot has been created beforehand that I can revert to after test. Our IT department is not that found of snapshots – something about performance and space requirements 4. Install on a test server where a backup has been created beforehand of the configuration + collection databases while the service was stopped, do the test and then restore after a downgrade of TFS – this seems like the most complicated way, but I think that it is actually the one I can do. Managed to upgrade a machine from RC1 to RC2, but encountering issues on another one. After installation, I get the prompt when stariting TFS Admin Console: "This is a pre-release that cannot be activated. To continue……uninstall this release and install the final RTM version." The same message comes up when try to click on any "Configure Installed Features". I've already uninstalled, restarted and reinstalled TFSRC2 on the machine, wondering if anyone can advise on this. @Dominic, you should not install TFS RC2 (or future TFS RTM) on the same host as RC versions of RM server/RM client. There are some shared license keys that get overridden, if you do so, and this will cause the RM server/client to expire or not work. Cai Zonghe, can you send me your setup logs? edglas at microsoft.com. @2re, We do all kinds of combinations of 1, 2 and 4. Not so much #3, though I don't know that I know why – maybe because the TFS data itself in the SQL database is almost never on a VHD in our production systems. If you are interested in any advice on how to make #4 as smooth as possible, I can have someone follow up with you. Brian Does Visual Studio 2013 have any difficulty talking to TFS 2015 RCs? @DrLeh, VS 2013 will work fine with TFS 2015. The only limits I can think of are: New Team Projects can only be created with VS 2015 or later A rename of a Team project will work best if you've installed Update 5 of VS 2013 Brian Hi Brian I would love any help I can get! Besides seeing that the upgrade actually Works (of cause it does ;-)), I am also doing it to get an estimate on the time needed for production downtime during upgrade. Our test collection database (currently 420 GB) is an old copy of the production (currently 320 GB – yes, cleanup helps!), so it would at least give me some sort of ballpark timeframe. @2re, the best way to test your upgrade is: First install TFS RC2 on your test instance, but do not configure it. Follow the backup / restore steps documented here: msdn.microsoft.com/…/jj620932.aspx, to restore to the SQL instance on your test server. Prior to running the upgrade wizard, run "TFSConfig remapdbs" from the command prompt. msdn.microsoft.com/…/ee349262.aspx Run the TFS admin console to upgrade the DBs per the first backup / restore page. @Daniel, A couple of things. First, the Power Tools will ship in a month or so – I don't have a precise date but we are working on getting them ready now and they will include the process template editor. Secondly, I'm happy to say, we are finally embarking on a new process customization experience that is fully part of the product. It will be browser based and a natural part of Team Web Access. It will also provide a greatly simplified user experience. I've seen some initial mock-ups of it and the team is just getting started. We should start to see some early peeks this fall on VS Online and once it's complete enough, we'll deliver it in on-prem TFS. Brian We have our TFS on RC1. To build our projects we need to have Visual Studio installed on machine with build agent. Since our build agent is on the same machine as TFS, how can I install RC2? (I'm asking because TFS RC2 cannot be installed on the same machine as VS 2015 RC) Thanks, "A rename of a Team project will work best if you've installed Update 5 of VS 2013" What about VS 2010, VS 2008, and VS 2005 clients? What won't work if we have developers on these VS versions and we rename a team project? JB, to have TFS, build agent, and VS all on the same box, you would need to use VS 2013 rather than VS 2015 RC. I know that's lame, but it's a possibility. Another option is installing the build agent on a separate machine and then using the RC 1 build agent and RC VS 2015 on that separate machine. If neither of these will work for you, please contact me at buckh-microsoft-com, and I'll work with you to figure out what we can do to help you, as I'd like to have you using RC 2. Gouri: The following link should answer your questions about Team Project rename and older VS clients… msdn.microsoft.com/…/project-rename @Buck, thanks. So I just need to uninstall VS2015 RC and I should be able upgrade to TFS RC2 right? (VS 2013 is already there) Can we use TFS 2015 RC2 as build agent and TFS proxy of another machine with TFS 2013 Update 4, and or vice versa? TFS 2015 will have so many features we needed. Definitely will upgrade as the first chance once its ready Hi, Either I am missing it or once again the ability to archive projects has been left out. It's great to add dev features, but the admin side of these tools is becoming daunting. How about a little help for the usersadmins. Thanks Did I overlook the Release Notes somewhere? Thanks for the great work, Brian & Team. @Ed, we are currently working on releasing the Release Notes for this RC2. The download link Brian shared should be updated with the links to them and the KB early next week 🙂 Thanks for your patience. Hi Brian, In your blog post from the 27th April, "First TFS 2015 RC production upgrade I know of", you mention about a pre-upgrade tool for large databases. Is this included in the TFS 2015 RC2? I'd love to try it out if it is included Thanks, Andy I upgraded from RC1 to RC2 last night and everything worked flawlessly. Thanks a lot to everyone who worked hard to make this happen! Will the delay before the RTM open the possibility of Card Coloring (that was just released to VSO) making it into the on-prem RTM? My company is running a SourceSafe 6 instance for nearly everything. Because there was no time for it, we didn't migrate to TFS the last years. But now it seems that we will migrate. Did you test these migration-plans too? In my previous tests with the migration assistant (TFS2013) we got many errors and I don't know if we will finish the migration this way. Oh, and additonally: is it recommended to upgrade from SS6? Do we get a "clean" database after the migration or should we start from scratch and keep our SS6-Repository for historical views? Hi, I'm working on the Urban Turtle team and I'm trying to update the references to use the new version of TFS 2015 (RC2). I'm having two problems preventing me to do it. I created a new fresh VM and I tried: 1. Installing TFS 2015 RC2 first, then Visual Studio 2015 RC –> When trying to install Visual Studio RC, I get "Fatal Failure" every time 2. Installing Visual Studio 2015 RC first, then TFS 2015 RC2 –> When I launch Visual Studio, I get the message that my trial license is expired (something about "prerelease") and it's preventing me to use Visual Studio at all. Is there any way I can make this work? Thanks, Luc @Luc, No, You can't install TFS RC2 VS RC on the same machine. There was a change in the licensing DLLs that make them incompatible on the same machine (but not on when different ones). Send me your email address at bharry at Microsoft dot com and we will get you unblocked. Thanks for getting Urban Turtle updated. It's a nice tool Brian @TheodorKleynhans – no, card coloring will NOT be part of RTM, it will be part of the next TFS 2015 update. -Ravi @Theodor, I'm afraid not. We won't be using the delayed RTM to bring in any additional features. We'll just be focusing on the last few bug fixes. Card coloring will be in Update 1 (we deployed it to VSO this week). Brian @Andrew. It isn't. We've built it but the plan is to make it available separately. It's really only applicable to customers with pretty large databases. Send me email at bharry at Microsoft dot com and we'll get you a copy to try. Brian Man, you guys are awesome. Thank you so, so much for pushing the release until you're sure the upgrade works. @Daniel, where there's no SP minimum specified, you can use the service packs that are available. I'll clarify that in the doc today. @Shay, yes the 2008 TE client connects to TFS 2915. There are some notes on that here: msdn.microsoft.com/…/requirements @DrLeh, VS2013 works with TFS 2015. msdn.microsoft.com/…/requirements Brian, THANK YOU for making the right decision, the decision for quality. I am happy if you take an extra day,week, month, or year to get the quality where it needs to be. Thank you for the transparency and honesty. This is extremely refreshing to see and I for one now have much greater confidence. We plan on doing our test upgrades in late October and deploying 2015 over the Christmas/NewYears downtime. Thanks again, can't wait to give team project rename a try with our development teams. Thanks again for all the hard work and the transparency. Bravo! Is the upgrade a side by side or in place? Also the Stored Procedures in the past were encrypted so the query plans did not exist, will this still be the case? Dusty @Vijay & @Barry, Well will it be ok if i install both RTM versions of TFS and RM Server/Client onto the same server or should i think about seperating those two services? @Allen what is "Windows Server 2011" on page TFS 2015 Requirements and Compatiblity ? @Allen "where there's no SP minimum specified, you can use the service packs that are available. I'll clarify that in the doc today." in TfS discussion forums there're many issues about SQL Server 2014 SP1 and the issue occurred only when SP1 for SQL 2014 was installed – similar thread exists for SQL 2012 with SP2. @Dusty, TFS has never supported SxS installs on the same OS instance. You can only install one instance of TFS per machine/VM. Yes, the sprocs are still encrypted. That doesn't mean there aren't query plans, it just means you really can't interpret them. Brian @Dominic: Once TFS RTM is released, it is ok to install the RTM versions of TFS, RM server, and client on the same box. Brian the card coloring is a disaster on VSO. I have to make a rule to change card coloring?!?! I just want to click and change the color of a card, it should be easy, but it's not. Also the coloring just seems off and weird. "Card styling allows to visually draw attention towards specific cards. Add, edit, and reorder rules for card styling. Cards that meet multiple rule criteria will display with the style of the first rule in the list. " Why are you making something that should be easy so hard? groan….. I installed RC on a machine a few weeks ago. I just uninstalled it, downloaded RC2 from the link above, and now I'm being prompted to enter my license information. I wasn't before. What gives? @Dan, RC's generally don't include licenses. They expire at a fixed point in time and can't be made perpetual. Because this RC2 is so close to the RTM, the licensing infrastructure has actually been changed from the RC licensing infrastructure to the RTM one. That means there's no fixed expiration date but rather you have two choices – go with a trial or enter licensing information. For now licensing information isn't available – and won't be until after RTM. Go with trial. It's 90 days plus a self-serve 30-day extension. I don't see any world in which we don't RTM WAY before that and have licensing keys available for you. At RTM you can just upgrade to the RTM build and enter the key. Brian Thanks Brian. Just so I'm totally sure, we're going from 2010 to 2014 and it was a long time ago, but I don't remember entering licensing info for that server. When I check on that now, it shows "Volume License". So i'll be ok if I do the trial on the new machine? I'll be able to get a key from the my MSDN product keys area after RTM? Sorry, meant to say we're going to 2015, not 2014 @Horui – appreciate the feedback. The scenario you mentioned is a "manual and just-in-time" operation – I acknowledge the simplicity around it. However, I have to keep repeating the operation every time I need to change the color of a card. Picture these scenarios (as a sample) now: I want the cards to keep changing colors depending upon how close they are to the Finish Date. I want to highlight the cards that are being worked in the Current Iteration I want to highlight any card that has a "blocked" tag on it I want to highlight cards that have not been touched for "x" days For all the above, I can configure the rules one time and then the rules are applied as and when the criteria is met. I don't have to do any manual operations over and over again. Note: The support for macros (@CurrentIteration, @Today) is also coming in the next deployment. This approach allows you to get visual feedback on cards that may be in your critical path of work. This is similar to the email/outlook rules you define once to deal with the flurry of emails coming in. Your comment on the color palette is acknowledged – we have updated them to have brighter colors and you will see them in the next deployment. Thanks -Ravi @Dan, Yes, Volume licensing includes a "pre-PIDed" SKU which means the licensing key is baked into the product. That will continue to be true for TFS 2015, so when we RTM, the VL download site will have a pre-PIDed copy and when you upgrade from RC2, will automatically be licensed. If something strange happens with timing (like you are in some time critical phase of your project when the RTM happens) and the trial is nearing expiration, you can contact us and we can extend your trial to a more convenient upgrade window for you. Brian @Daniel, I could be wrong on that. I'm double-checking now. @Daniel, Windows Server 2001 Small Business Server: msdn.microsoft.com/…/gg490793.aspx @Daniel, yes we support service packs as soon as they release. The doc update will be live very soon. Here's what it will say: it before the service packs release. @Daniel, or Windows Home Server 2011 @Ravi why not have the metaphor on the card be "apply coloring to cards that are similar to this one" and pre-fill the rules that match the fields on the card? Also, why can't I just change the color of the cards with 1 click on the board itself without having to open each work item form. If you make it ***easy*** people will change colors as they need…the feeling with rules…make it feel like changing colors requires permission and admin approval. If you make it easy…people will use it….look at trello. Trello makes colors and stickers SUPER easy. @Houri – appreciate & acknowledge the feedback of making it simpler to add rules, I am adding it to the product backlog. Thanks -Ravi I'm using the code sample of VSO REST API to queue a vNext build on TFS 2015 RC2. When I enter an ID of a vNext build definition I get ""TF215016: The build definition 2 does not exist. Specify a valid build definition and try again." (works perfectly well for XAML builds). Is there any way around this issue? Do we have tools available from Microsoft to migration from TFS 2008 to TFS 2015? @Amir – what version of the REST API are you using? 2.0 supports all the new build functionality. We're waiting on the docs to publish at msdn but you can also open fiddler when using the web UI. Our web UI uses all the public REST APIs so it's a good reference example if you watch the traffic (look at headers for version as well). You can email me bryanmac at microsoft with code if you still have issues. @Bryan – I didn't know you use the REST API on your portal. Without knowing that, there was no way I could have done it. The API is completely different. To queue a new build one must use build/Builds (and not build/requests) with a different set of JSON objects as parameters. Thank you for your help! Btw if anyone is encountering the same problem – see here more info regarding the solution: stackoverflow.com/…/how-to-trigger-a-build-in-tfs-2015-using-rest-api Bryan – the reasons I wanted to trigger a build using REST API is because we have a Gated check in and once it's completed we want to start deployment using vNext definition. Is there a better way to do this? And are you guys planning to add gated check in trigger to vNext (which is the only reason we currently have a XAMLdefintion)? @Manish – while direct upgrade from TFS 2008 to TFS 2015 is not supported, you can certainly accomplish this upgrade in multiple "hops." That is, if you first upgrade from TFS 2008 to TFS 2010 SP1, you can then upgrade directly from TFS 2010 SP1 to TFS 2015. No special tools are needed to accomplish this beyond the relevant builds of TFS. @Amir The API docs are in a branch awaiting a pull request. Should be soon and I just touched base with folks to ensure it gets out there soon. We knew that a successful API means it's one our product uses so fiddler on the web UI is pretty definitive 🙂 but docs will help explain the relationships etc… The new UI working with Xaml builds is the same. We worked hard to make sure the compat is there for 1.0 and xaml concepts did not change in 2.0 api. The 2.0 API working against new build definitions are different where they need to be different (build vnext doesn't have the concept of a request, xaml has no concepts of tasks or process json etc…). But note that we did work hard to ensure where they are the same (definition, build, etc…) they are the same. In other words, there is one /build namespace and there is one /builds/definitions resource. @Ben S – Regarding Visual SourceSafe migration, if you're trying to migrate your history, you'll need to use this migration tool (visualstudiogallery.msdn.microsoft.com/867f310a-db30-4228-bbad-7b9af0089282). If you're still on VSS 6.0, that might be the reason you hit issues – the tool requires VSS 2005. Also see this topic on how to migrate: msdn.microsoft.com/…/ms253060.aspx Re: clean migration – if you use the tool, there are options to migrate with history or copy just the tip. If you migrate history, the tool will attempt to replay the history from VSS into TFS. This process isn't perfect, so if leaving the VSS repo around for historical reasons is an option for you, I highly recommend doing so. TFVC and VSS are very different version control systems, and I think that moving between systems is a good opportunity to reconsider and update your branching strategies, code structure, etc. Here's a doc with some guidance about best practices for TFVC: vsarbranchingguide.codeplex.com @Barry Just to let you know that the French download page offers download to the Frenc RC and not RC2 version while the english download page offers the French RC2 version of TFS 2015 🙂 Check it out:…/visual-studio-2015-downloads-vs…/visual-studio-2015-downloads-vs @Barry after downloading the french ISO (tfs2015_server_fra.iso) i was surprise it wasn't mentionning RC2 anywhere… is it the RTM build? The setup screen is only mentionning Team Foundation Server 2015 and the build is 14.0.23107.0. Should i proceed with the installation? Is this the actual RC2 version? @Dominic, That sounds like mistake in the download page. It should say RC2. I'll look into it. Yes, the actual software doesn't say RC or RC2. That's an artifact of a relatively late decision to go with RC2 vs RTM. We chose not to go back and switch the in-product branding to RC – so instead, it says neither RC nor RTM; it just says TFS 2015. You will still be able to tell which version you have by build number. The final RTM will have a newer build number. Brian Does the upgrade process take into consideration a change of domain at the same time as the TFS upgrade? I'm doing a trial upgrade from 2010 ->2015, and after leaving the Wiz running all night, its stuck at step 979 of 1538 on most of our collections (seems only the ones with content). This has been running all night (~17hours) I did a trial from 2010->2013 last fall with the same TFS backup files (project was put on hold pending 2015 release) and the upgrade only took a few (~4) hours. Thanks Aaron Hallberg for your comments. Which tools can be used to migration from TFS 2008 to TFS 2010? We are using TFS Integration tool, Not sure if this is suggested tools, facing few issues. It would be great if i can get tools available from Microsoft to complete this task. @Manish "Which tools can be used to migration from TFS 2008 to TFS 2010" you need to use TFS 2010 preferrable a instance which have no TFS collection already. @Manish – We typically distinguish *upgrade* from *migration*. Upgrading from one version of TFS to another will preserve all of your data – version control history, work item history, relationships between work items and changesets, etc. Migration tends to be lower fidelity and should typically only be used when upgrade is not an option for some reason. For example, see the notes about upgrade vs. migration and limitations on the TFS Integration Tools homepage here: visualstudiogallery.msdn.microsoft.com/eb77e739-c98c-4e36-9ead-fa115b27fefe. To *upgrade* from TFS 2008 to TFS 2010, just follow the instructions here: msdn.microsoft.com/…/dd631912(v=vs.100).aspx. The basic idea is that you are preserving your TFS 2008 databases, installing new TFS 2010 bits on your application tiers, and then running through the upgrade wizard in order to upgrade your databases to the new 2010 schema. The process will then be similar for upgrading from TFS 2010 to TFS 2015. We are in the process of getting our TFS 2015 upgrade documentation posted, but to get a sense of what it will look like you can see the TFS 2013 upgrade documentation here: msdn.microsoft.com/…/jj620933.aspx. @Scott – you should not typically attempt to combine an upgrade and a domain migration. See here for some more information on domain migrations: msdn.microsoft.com/…/ms404883.aspx. On the speed of the upgrade – going from TFS 2010 all the way to TFS 2015 will take a fair bit of time, since it is doing multiple quite expensive schema upgrades in SQL space along the way. Given that your trial upgrade from 2010 to 2013 only took four hours, however, it certainly sounds like something has gone wrong. If you'll reach out to me at aaronha@microsoft.com I can put you in touch with somebody to help take a quick look at what might be happening. Is there any way to upgrade a few team projects from TFS 2010 to 2015 (without touching any of the other team projects in the collection)? We want to take this upgrade as an opportunity to clean out the collection. Currently, we're thinking of just breaking off completely and installing a new instance of TFS 2015 and copying over the repository items that we want. This option will cause us to lose history and force us to connect to the old repository every time we want to look up historical information. @Jennifer S – there is no mechanism to upgrade some projects but not other in a single TFS deployment. You can, however, do a combination of a collection move (msdn.microsoft.com/…/dd936138.aspx) and split (msdn.microsoft.com/…/dd936158.aspx) in order to leave some projects behind on an older server and them move the other projects to a new server which you then upgrade. For example, you could: 1. Detach the collection from the original server and back it up. 2. Re-attach the collection to the original server and delete the projects which are going to be moved/upgraded. 3. Attach the collection to a second server at the same version as the original and delete the projects which got left behind. 4. Upgrade the second server to TFS 2015. With some slight modifications you could adjust that plan so that the original server is the one that gets upgraded and the "archived" projects move off to a new server. As with any operation like this, make sure you have complete and consistent database backups before you start so that you can recover if something goes wrong. Hope that helps. Thanks Daniel. Thanks Aaron Hallberg for your reference link. I will try it out on our TFS 2008 server. One question: Separate TFS collection will be created for each of Team Project within TFS 2008? If yes then we need to use msdn.microsoft.com/…/dd936158(v=vs.120).aspx link for splitting Team project into separate collection OR keep them in single collection and provide access rights accordingly ? @Barry Is it normal the installation of the RC2 version asked me for a TFS licence code? I've selected the "trial" option… @Manish TFS 2008 does not support something like a TFS collection. if you migrate your TFS 2008 the current TFS 2008 Team Projects from one instance will convert to a single TFS 2013 collection containing the Team Projects (the collection will be called DefaultCollection but can be later renamed if you're using multiple collections in your new TFS instance). @Dominic "Is it normal the installation of the RC2 version asked me for a TFS licence code? I've selected the "trial" option…" yes – there're no product keys for the RCx available. when upgrading to RTM you need to enter product key or use the PID depending of the source of your installation medium. see Brian's comment at blogs.msdn.com/…/team-foundation-server-2015-rc2-available.aspx Hi Brian, I've tried to migrate our 1,5TB TFS database from TFS 2013 Update 4 to TFS 2015 RC2 and got a conversion error (character to uniqueidentifier). Do you have a special technical contact address where we could track down this error or should I contact the regular MS support? Thank you, Joachim Thanks Matthew, indeed leaving the VSS repo around for historical reasons is an option and I think we will do this. This is also an excellent moment to leave some projects behind that are abadoned for about 10 years or so. That helped me and now I have a statement to go into discussion 😉 @Joachim, Please send me email at bharry at Microsoft dot com with the details and I will have someone contact you. Brian Thanks @Aaron. That is very helpful and sort of what I feared we'd have to do. Hi Brian, I'm doing an upgrade of two small collections and the Update progress stops at step 345 and keeps waiting. What is going on here ? (Upgrade from TFS 2013 update 2 tot TFS 2015 RC2). Thanks, Rik I did a test upgrade of TFS 2013.4 to TFS 2015RC2. Yesterday everything was ok. Today I suddenly get the error that my trial has expired and I can't access the web portal anymore. Any idea how to fix this? Hi Brian, After looking into the problem in vw_ServicingStepDetail the project collection upgrade wizard was waiting on: WorkItemLongTexts to Fully populate the Full-Text index. This was not happening automatically even after a long time-out. Is this misconfiguration on our part of SQL Server or a bug in RC2 ? Best Regards, Rik Meijer — Hi Brian, I'm doing an upgrade of two small collections and the Update progress stops at step 345 and keeps waiting. What is going on here ? (Upgrade from TFS 2013 update 2 tot TFS 2015 RC2). Thanks, Rik Are there hybrid typologies for TFS? Example: Can I host TFS App and DB on Prem, Build on Azure, RM on Azure? Does it ever make sense to do this? Or is it simply "ALL" on Prem or "ALL" on Azure or "ALL" VSO… ? @David, There are lots of hybrid variations – starting with you can host TFS on-prem, on IaaS (Azure, Amazon, …) or use VSO. Within all of those options, you can some hybrid scenarios. The most common are around "agents" – build, test, release, etc. The agents can generally run anywhere you like – on prem or IaaS, regardless of which TFS hosting solution you choose. Brian @Rik "I'm doing an upgrade of two small collections and the Update progress stops at step 345 and keeps waiting. What is going on here ? (Upgrade from TFS 2013 update 2 tot TFS 2015 RC2)." could you try to upgrade first to TFS 2013 Update 4 and than to TFS 2015 RC"? (there was a significant schema change between Update 2 and Update 4 and maybe that could lead to the issue.) @Rik/@Daniel/@Scott/@Everybody – The upgrade to TFS 2015 is expected to be relatively slow, since there are major schema changes in support of team project rename. If your upgrade appears to be stuck, you can use TFSConfig Jobs /DumpLog /CollectionName:<collection name> (see msdn.microsoft.com/…/ee349266.aspx for more info) to get more details on what is currently happening. We have had two folks hit an issue where their upgrades get stuck waiting for the full text index to populate on the table WorkItemLongTexts_Dataspace. If your upgrade gets stuck in that spot, please reach out to me at aaronha@microsoft.com and I can put you in touch with some folks that can help get you unblocked. There are several other steps during the upgrade that are expected to take a fairly long time – migrating version control data, migrating framework data, etc. If the log shows that your upgrade is running one of those steps, give it some time and it should start moving again. Hi Brian, In the meantime we could successfully migrate our DBs to TFS 2015 RC2. It turned out that we used some wrong security tokens in combination with TfsSecurity tool. Thank you for your excellent support – it was a pleasure to work with! Best regards, Joachim Glad to hear it @Joachim. Brian I am unable to download TFS 2015 RC2, the download from…/download-visual-studio-vs only takes me to the Microsoft home page. @Ed, thanks. We'll look into it. The same thing happens for me. Brian @Ed, we figured it out. It got broken in all the publishing of RTMs for today. We're getting it fixed now. English is already fixed and other languages are in progress. Brian Can't download ISO file of TFS RC2. Just redirect me to microsoft home page. @Ivan, there is a problem with the link on the page, I am working on getting it updated now. I had to add an Assembly Redirect for TfsJobAgent to get the upgrade going for System.Web.Mvc 4.0.0.0 to 4.0.0.1. No idea why. Also for the Web Tier. @Aaron Hallberg: Thanks Aaron for your comment on the WorkItemLongTexts_Dataspace problem. I have resolved the endless wait by populating the index myself. And the project collection update wizard almost instantly finished. After this I checked the configuration of SQL Server and found that SQL Full-Text Filter daemon was not running. After starting this service, the problem is not reproducible anymore (when doing a new database restore). Could this have been the problem and if yes, could the upgrade wizard check if this service is online? Best regards, Rik I might have missed something, but is it still possible to install a build controler OUTSIDE of the TFS main server ? like it was the case with 2013 : msdn.microsoft.com/…/ms181712.aspx There is no option to do that during the installation process (wizzard). we have a server for TFS, a aserver fot the database and one dedicated build server per collection, can we do that with TFS 2015 thanks @Nicolas Industrial Alliance, Yes, it is. A bunch of detail… First, note, that we've eliminated build controllers. We used to have controllers that managed one or more agents and you had to think about both. We eliminated controllers and now just have agents. It makes management easier, improves our high availability story, etc. By default an agent is installed on the TFS server when you install TFS but not enabled. It does nothing if you don't enable it. You can enable it and run builds on your TFS server or, like I'd expect most teams of any size to do, you can install agents on other machines. If you go into the admin part of our web experience (gear upper right), then go to the root of the control panel and the Agents tab, you'll find a button to download an agent onto a build machine. That's the new install process. So you don't have to mess with ISO images, etc. Here's some docs on the process: msdn.microsoft.com/…/windows Brian @Rik – Thanks a bunch for following up. I am not sure if that is related or not, but I will share the information with the relevant folks so they can have a look. For 2015 RTM we took a fix for this which will cause the upgrade to succeed (with an appropriate warning message) even when the full text index cannot be populated. The SQL team advised us that we were unlikely to be able to successfully automate anything that would *repair* the issue – their best advice was to point people to the full text crawl logs so they could get more information and resolve the issue externally. If there are things we could detect up front though, in a readiness check, I agree that would be useful, and we should certainly consider it. Thanks again. Hi @LunicLynx, what was the error you were seeing before doing Assembly Redirect? We just did an upgrade to TFS 2015 RC2 and it seems to have completely broken nuget package restore. We have some new build definitions using build.preview to build some VS 2015 projects. For those builds, the nuget package restore simply does not run. There's no error or an indication that something is malfunctioning. It just doesn't restore any of the packages, which cause the build to fail. I tried adding a nuget package restore step but that exhibits the same behavior, no errors but also no packages get restored. The nuget package restore also now fails on our old xaml builds that were working in TFS 2013. For those we get the error: C:A840src.nugetNuGet.targets (100, 0) The command ""C:A840src.nugetNuGet.exe" install "C:A840srcfoopackages.config" -source "" -NonInteractive -RequireConsent -solutionDir "C:A840src "" exited with code 1. This, of course, also caused those builds which were working previously to fail. Hi, i still found a bad bug within tfs 2015 RC 2 / Team Web Access. I have created (in 2015 RC1) a default collection with two Projects (Project-A and Project-B) with different 4 Users. Project-A (User-AA, User-AB) Project-B (User-BA, User-BB) If i open the TFS Website and create a new query for workitems i allways find the users (User-AA, User-BB, @me and the Administrator). It doesn´t matter what Project i open (Project A or B) always the same users …. Will this be fixed in RTM ?! @Brian Found a very small "bug". Actually it is not a bug it is a wrong documentation. This documentation says wrong variable names msdn.microsoft.com/…/variables BUILD_BUILDDEFINITIONNAME is actually BUILD_DEFINITIONNAME BUILD_BUILDDEFINITIONVERSION is actually BUILD_DEFINITIONVERSION Sent it on connect: connect.microsoft.com/…/tfs-new-build-system-bug-with-powershell-environment-variables @Dominic, Thanks. We'll look at it. Brian @Brian : Could you please clarify "eliminated build controllers"? The document you referenced says "you can continue to use XAML builds, controllers, and agents." Were you just referring to the new build system? @Greg_P, Yes, I was referring to Build.vnext. We didn't make big changes to Xaml builds. Brian @JuergenB – the behavior as you describe it has not changed from previous releases and is as designed. This is because we support cross project queries and we currently do not scope the field pickers in the query editor per project. Security is still enforced. If you have follow up questions please do not hesitate to contact me at valenker@microsoft.com. Valentina Hi Brian, I upgraded our TFS 2013 Update 5 to 2015 RC without any problems. Just a question: Should it be possible to build C# 6 code with TFS 2015 RC2 Build Agents or do I have to wait for RTM? Right now it is not working. I even installed Build Tools and than complete Visual Studio 2015 RTM on the Build Server but still it is not compiling C# 6 features. Hi Markus, This should definitely work. If it works on your dev environment, it should work on the build machine. Have you tried running MSBuild directly and seeing what happens. Perhaps, there is a slight difference in how VS is calling MSBuild and how Automated Build does. You should be able to see the MSBuild command line that the automated build uses in the logs. If you continue to have problems, contact me (jpricket at Microsoft dot com). Thanks, Jason (part of the TFS Automated Build team) My group is upgrading from TFS 2013 Update 2 to TFS 2015 RC2. We have VS 2013 installed on all build servers. Assuming we don't upgrade our development machines for some time to VS 2015 and we continue using VS 2013, will we currently need to install VS 2015 on the build servers or should VS 2013 still be sufficient for the project dependencies we have run into (which is why we have needed to install VS on the build servers)? @John, VS2013 should be fine. Brian We upgraded from TFS 2013 u2 to TFS 2015 RC2 today but are having problems running XAML builds. We use custom build templates that worked in TFS 2013 but when running the same build definitions in TFS 2015 RC2 the build fails almost instantly. When running any of our builds we get errors similar to the following: TF215097: An error occurred while initializing a build for build definition abcxyz: Exception Message: Cannot create unknown type '{clr-namespace:TfsBuildExtensions.Activities.TeamFoundationServer;assembly=TfsBuildExtensions.Activities}GetCodeCoverageTotal'. (type XamlObjectWriterException) The "type" varies by build definition but the error is consistent across our builds. We've spent time today trying to figure out how to get around this but are at a loss, do you have any ideas on how to get past this? Epics backlog does not exist. After upgrading TFS 2013 U2 to TFS 2015 RC2 the new "Epics" backlog does not exist. I've checked the team settings page and only see "Features" and "Backlog Items" and their checkboxes. Is there something else that needs to be set for the Epics Backlog to appear? I have checked multiple project collections and team projects. In one team project we did previously add a 3rd backlog level, could that have prevented Epic from being added to all collections and projects? @John Hughes: the Epics backlog is not added to your team projects automatically when you upgrade to TFS 2015. There are 2 steps that you need to take to enable it: 1. First you need to add the work item type, and change the process configuration to add the Epic portfolio backlog: msdn.microsoft.com/…/dn217880.aspx 2. You need to turn on the Epic backlog for the teams who want to use the Epic backlog: msdn.microsoft.com/…/organize-backlog Upon upgrading from TFS2015 RC to TFS2015 RC2 I found a little glitch where the Backlog Explorer is completely empty in projects without Team Members. Not a worrying situation but I thought I'd mention it. Adding team members and refreshing restores the functionality. See my blogpost for more details: dietercamps.blogspot.nl/…/tfs2015-rc2-upgrade-backlog-items-not.html @Dieter, Thanks. We'll look into it. Brian @Ewald, Thanks, I misread the notes on MSDN for SAFe. Are there any known situations where TFS 2015 RC2 does not save/present build logs for completed builds? I have done a test upgrade of our companies TFS 2010 TPCs and then created a new TPC with a new project and uploaded some test code into source control (git). I created a vNext build with an MSBuild step and a Publish Build Artifacts step. However after running the build there is no way for me to view the build's output and the ZIP from "Download all logs as zip" only has an empty "Build" folder in it. I'm pretty new to TFS, so is there something I'm missing to get TFS to save the logs? There are no steps listed in the left column either during or after the build runs. @ Daniel Did the build run? (Did it say success with steps on the left?) What's in your build definition? You can email me bryanmac at microsoft and I'll follow up. Today, on August 3, the first working day of August, TFS 2015 RC1 is not working anymore because the Trial License has expired!! What happened? I was under the impression that the RC1 was in trial for 90 days? Will installing the RC2 fix this problem? Please help! @roel, we are having the exact same problem today. Been onto Microsoft support via IM and they said there are no RC/RC1 keys available, evin in my MSDN product keys area. They said I had to try forums and someone would answer within 2 days. This is affecting our live/production instance and is not an option. I'm trying to do the upgrade from RC1 to RC2 but the installer isnt detecting the existing install 🙁 @Kyle, roel got unblocked by upgrading to RC2 (I had an offline conversation with him). Pre-release builds generally expire. I don't know what date we picked for the RC but, given that you see it expiring, it must have happened recently. Unlike a trial, there's no way to extend a pre-release build. My suggestion would be to install RC 2. That should be good for 3 months and RTM will be available within the next week. Brian @Dieter thanks for reporting the issue faced. The root cause is cached JavaScript files in the browser, The cache should expire after an hour or can be refreshed by Force Refresh in browser (Ctrl + F5). We have logged a bug to address this for future release. I upgraded from TFS 2013 to TFS 2015 RC2 and I don't seem to see any of the new configuration options for the Boards? I don't see the Settings Gear anywhere even though I am a Project Administrator, TFS Administrator, etc. Are the Board Cards Configuration (Adding Tags) in the TFS On Premise version of TFS 2015, or only On-Line version? Why wouldn't I see the Settings Gears as shown here: msdn.microsoft.com/…/customize-cards Also, after the upgrade, what templates provide the Epic Work Items, out of the box? I don't think it is in there for existing projects after the upgrade, but I thought it would be there for new Scrum or Agile projects? @Tom McCartan, The TFS 2015 RC2 version contains all the features listed as "2015" on this webpage:…/release-archive-vso This includes all the features you listed. The gear icon should appear on the boards view. Be sure you are a team admin on the team you are viewing the board for. The Epic work items are available for all out-of-box process templates. However, they are not turned on by default. You can turn them on by following the instructions on this MSDN topic: msdn.microsoft.com/…/organize-backlog Look for the "Activate backlog levels for your team" section. I hope this helps. Let us know if you have more questions. @Gregg, Thanks for the reply. Also, we have many existing builds configured and after the upgrade we are getting the following error in our build log? Is this something that did not configure correctly? Where would this assembly be and why is it missing?) @Tom McCartan You can fix this by either install VS 2015 or MSBuild 2015 on the build server. In RTM we late bind against the MSBuild object model for locating the MSBuild versions on the machine so VS 2015 and MSBuild 2015 is not a requirement. @Greg and @chrispat – I was able to resolve both issues by Installing VS 2015 and by Repairing the installation of TFS 2015 and Re-Starting the server. Thanks for all your help. Power Tools? ETA? We have upgraded TFS 2013 to TFS 2015 RC2 and all our existing builds get the following error:) … If we make a new build definition with the default template it works! On our build servers we also installed VS2015. We cannot find the problem, any idea how to fix this? @Ralph Welling That was due to an issue in RC2 that required you to install the 14.0 build tools. We fixed this in the RTM. Hi all, we upgraded to TFS 2015 and so far all went pretty well. I am just fiddling with our build servers as they won't tun any tests now due to "TF900547: The directory containing the assemblies for the Visual Studio Test Runner is not valid" with no given directory. When researching this error I came to some posts for 2013 which suggest to install the Visual Studio on the build agents respective install the TFS Build Tools. I had the Visual Studio Premium 2013 installed on the machines before and all builds were looking good. I tried to fix configuration by repairing and installed the TFS Build Tools 2015 on the machines – still no luck. Is there any documentation what the Build Tools include actually? I found nothing in the web. Additionally anybody who can help me out here? Regards, Carsten @Carsten B. The build tools are just MSBuild and won't contain any of the test framework pieces. You will need to install Visual Studio to get all of the necessary test framework bits. Hey @chrispat, now with my real account.. 🙂 First: thanks for your answer. I probably did not make this very clear in my post, but the Visual Studio 2013 was installed on every build machine before. I had just upgraded the TFS 2015 sources and after that my build definitions which were working correctly failed with the given message. Therefore I tried to "repair" the installation with the setup, no luck here. My other guess would be to completely uninstall and reinstall the Visual Studio on the machines. I am just wondering why the setting for the path was removed during upgrade and where it may be set manually. I haven't found any clues in the configuration console nor in the Registry of Windows Server. Obviously the setup process of Visual Studio should set the value correctly, I am just trying to prevent the time-consuming re-install. @Carsten Büttemeier You will need to install VS 2015 on the box for the test runner to work. The test platform is delivered as part of VS and you need the matching version to work with the XAML build system. This is something we have addressed in the new build system shipped with TFS 2015.
https://blogs.msdn.microsoft.com/bharry/2015/07/07/team-foundation-server-2015-rc2-available/
CC-MAIN-2017-22
refinedweb
10,087
72.26
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Is it possible for a report to use data from two unrelated tables? Hello, I don't seem to finish understanding how reports work, or at least how to make them work in the way I need. I developed a custom module with multiple tables, and what I need is a report which takes data from different queries to those tables and then produce a PDF. I understand how to do it for just 1 table. In my ideal scenario I would create different queries to each table and then produce a report. I don't seem to find a way to include in a report data from tables which have no relationship between them. For example lets say I have a table for "Expenses" and another one for "Income". They don't have a common key between them. Is there any type of report I could use to produce a PDF with the query to both tables? Thanks for any tip! By default in your reports, you can access all the object linked to the main object (o, the one you associated the report to, i.e. an invoice). This means for instance, that you can access the partner object by doing o.partner_id, and a partner category object with o.partner_id.category_id. Since there are fields related to accounting on the partner form, you can access this way more or less every object. But, if you don't want to take this "dirty" way, you can use a parser. Take the invoice report definition for instance ( addons/account/report/account_print_invoice.py): ) the self.localcontext is always passed to the report, then you put a key (i.e. sale_orders) inside this dict, with value all the SOs i need. Obviously, you'll have to populate this value with the list of object you need loop on. And you can do this by searching the sale orders with, for instance: from openerp import pooler pooler.get_pool(self.cr.dbname).get('sale.order').search(self.cr,self.uid,[]) Inside [] you can put your domain to filter the results. This way you can print whatever you like in your report. About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/is-it-possible-for-a-report-to-use-data-from-two-unrelated-tables-6392
CC-MAIN-2017-26
refinedweb
413
65.62
XMLBeans Version 2 Plans Overall XMLBeans Vision in relation to V2 XMLBeans is about providing access to the full power of XML to Java users in as user friendly manner as possible. XMLBeans Version 1 provides Java/XML binding that supports near 100% of XML Schema features as well as full access to the XML Infoset via the XmlCursor API. Version 2 is more evolutionary than revolutionary, it is a continuation of the XMLBeans Version 1 work which includes expanding on the core XMLBeans Java/XML binding features as well as the underlying XML Store. Perhaps the most significant feature that has deep architectural impact is DOM Level II support over the XML Store. As with Version 1, but more so with Version 2, XMLBeans can be viewed as a model for access XML in Java in general, both a Java/XML binding strategy for accessing XML when XML Schema(s) are available, as well as an API for direct access to the XML via DOM or XmlCursor. The following section outlines goals for XMLBeans in Version 2. Specific V2 Objectives DOM Level II Support - In Version 1 the only way to get a DOM Node for an XmlObject or XmlCursor was to use the newDOMNode API which copied from the XML Store into a new DOM Node. This is a fairly innefficient operation and the DOM Node was disconnected from the underlying XML Store so changes to the DOM Node were not reflected into the XML Store. There have been a significant number of requests for XMLBeans to support DOM natively since lots of tools (and developers) know how to work with DOM already and do not know how to work with XmlCursor. Also, SAAJ 1.2 now relies on an underlying DOM and having a SAAJ implementation on XMLBeans should be possible (since SAAJ represents a non-lossy SOAP Envelope Document). In Version 2 DOM will be implemented natively, meaning the tree within the XML Store will be able to represent DOM objects. Note that XmlCursor will remain fully supported so users will be able to switch between DOM, XmlCursor, and XmlObject (either untyped or typed). Extensions - In general XMLBeans generated interfaces have been pretty static, in large part due to the XMLBeans overall objective to correctly support the XML Schema type system (including the custome types defined in the schema) in Java. You can map target namespace/package and element/property names but that was about it. In Version 2 (this may be ported to Version 1 as well) you will be able to add custom functionality to generated XMLBeans interfaces/classes. To accomplish this you will be able to pass the Schema Compiler two things 1) an interface that defines the set of methods to implement and 2) a static handler which implements this functionality (it is debatable whether this should be static or instance based, there are arguments both ways). The underlying XMLBeans generated classes will implement the interface and for each method call out to the static handler. Note that this capability allows XMLBeans classes to be your interface, this could allow certain binding type strategies to sit on top of XMLBeans. For example, you could imagine an SDO (Service Data Objects) implementation on top of XMLBeans such that the SDO DataObject interface could be implemented by corresponding XMLBean(s). 7/14/04 - Note this feature has been implemented see this wiki page for more information.. Compilation Performance - In XMLBeans V1 compilation, while very performant, was essentially fully batch oriented. If one minor change occurred to a single XML Schema amongst a whole set of XML Schemas that are submitted to the XMLBeans schema compiler the entire XMLBeans schema type system and generated classes were recreated. In other words XMLBeans has been unable to take advantage of compilation work that already occurred. For XMLBeans users with large numbers of XML Schemas, or IDE's integrating XMLBeans, compilation time can be problematic. In XMLBeans V2 major steps towards addressing this issue will be implemented. When doing an XMLBeans compile you should be able to pass in existing XMLBeans compiled artifacts (probably not a jar, most likely an exploded directory, or perhaps an in memory representation) and the XMLBeans compiler would do only the incremental work necessary to rebuild the type system and the java classes. In V2 this will likely show up as 1) only the java classes that change should be compiled and replaced and 2) incremental XML Schema compilation at the namespace level. Our opinion is that the doing incremental compilation at the type level is too low level and too difficult to implement (at least in V2) Improved XQuery/XPath integration - XMLBeans V1 has not implemented the execQuery() API (on XmlCursor and XmlObject) and the selectPath() API is integrated with Jaxen (XPath 1.0). The original proprietary XMLBeans that was donated to Apache (the basis for Apache XMLBeans V1) XMLBeans integrated with a proprietary BEA XQuery engine (note this implies XPath 2.0). In V2 we need to reexamine and clean up XMLBeans XQuery/XPath functionality. Since XMLBeans is store based a major advantage of XMLBeans should always be great XPath and XQuery (arguably XSLT as well ...) support. rem: Ideally we would talk BEA into open sourcing their XQuery engine <smile> JDK 1.5 Generics and Enumerations - XMLBeans V2 will work with both JDK 1.4 and JDK 1.5. V2 will provide the option to take advantage of JDK 1.5 Generics and/or Enumerations in XMLBeans generated classes. SAAJ 1.2 Support - SAAJ defines a SOAP Envelope Document structure and in the latest spec release SAAJ 1.2 it is defined to be tightly integreated with DOM. In V2 it should be possible to create a SAAJ Envelope structure such that the SAAJ Nodes are the same objects as the underlying DOM Nodes. This would save a SAAJ implementation on top of DOM not to have a parrallel Node tree and should improve performance substantially. Note the goal is not to develop a SAAJ implementation within XMLBeans but to allow for a SAAJ 1.2 implementation to be effectively implemented on XMLBeans. SAAJ is a web services related technology and could have web services container specific functionality (logging, error handling, etc.) in it's implementation. XMLBeans V2 will have a SAAJ interface and the underlying XML Store tree nodes will implement the appropriate SAAJ interfaces and call back through the XMLBeans defined SAAJ interface. The goal is that any web services container could implement SAAJ 1.2 efficiently over XMLBeans. Improved Error Handling - In XMLBeans V1 error handling is ok but needs work. Errors should have codes. Errors should point accurately to the object in Error (location information needs be accurate). Error messages should be well written and consistent. Other Items To Be Researched/Considered Schema to POJO/POJO to Schema - XMLBeans generated classes are tightly coupled to the corresponding XML Schema(s). This is not necessarily a bad thing, it is what allows XMLBeans to fully support XML Schema functionality in a seamless and elegant manner. XML Beans versioning has essentially the same issues as XML Schema versioning. Consequently best practice with XMLBeans is to architecturally isolate XMLBeans through other layers using Data Access Object type patterns (using DAO informally here). Essentially this involves mapping the data in the XMLBean to a value object or business object and the rest of the codebase writes against this mapped object. Thus if the XML Schema changes it may be possible to change redo the mapping from the XMLBean to the value/business object. Currently with XMLBeans this is an exercise left up to the developer. It may be possible for XMLBeans to provide a mapping object that could facilitate a more loosely coupled architecture like this. This could entail a mapping file (or annotations) against an existing Java class that would allow the synchronization between the XMLBean and the corresponding class. There is still a good amount of research to be done here. It is possible this is more of a best practices document outlining a Data Access Object type pattern then an XMLBeans feature. Canonicalization - This is may be more research than committed feature. One of the more expensive aspects of XML Signature is the C14N processing. This involves reading the target XML nodes from the tree and canonicalizing the XML. It may be possible to improve this performance by supporting C14N directly in the XML Store (then again maybe not ..). Hibernate integration - There have been multiple discussions around using XMLBeans with Hibernate (). Ideally this would show up as some sort of plug-in into Hibernate that supports XMLBeans persistence effectively. Eclipse and Netbeans plug in for XMLBeans - XMLBeans supported plug-ins for these popular IDEs. Weblogic Workshop already has strong support. Items we don't expect to get to XmlObject Lossy Binding - It seems feasible to generate XmlObject derived implementations of XMLBeans classes that do not require an underlying XML Store. Of course that would mean that a newCursor() call on XmlObject would get UnsupportedOperationException. The idea would be that an XMLBeans user may not initially need the functionality of an XML Store and could work completely through XmlObject (e.g., the binding layer). If at any point there is a need to have an underlying XML Store then there would be a compile option to have the XML Store available. This is primarily a performance optimization the generated java classes would still derive from XmlObject and have the same shape. A lot of research needs to be done here and it is possible this approach will not be feasible. DOM Eventing - DOM Eventing is challenging with the XML Store architecture and may not be feasible. XMLBeans will not address it in V2.
https://wiki.apache.org/xmlbeans/V2Features
CC-MAIN-2017-30
refinedweb
1,624
51.28
apiarist 0.1.0 Python Hive query framework A python 2.5+ package for defining Hive queries which can be run on AWS EMR. It is, in its current form, only addressing a very narrow use-case. Reading large CSV files into a Hive database, running a Hive query, and outputting the results to a CSV file. Future versions may extend the input/output formats. The jobs are runnable locally, which is mainly for testing. You will need a local version of Hive which is in your `PATH` such that the command `hive -f /some/hive/script.hql` causes hive to execute the contents of the file. It is heavily modeled on [mrjob]() and attempts to present a similar API and use similar common variables to cooperate with `boto`. ## A simple Hive job You will need to provide four methods: - `table` the name of the table that your query will select from. - `input_columns` the columns in the source data file. - `output_columns` the columns that your query will output. - `query` the HiveQL query. This code lives in `/examples`. ```python from apiarist.job import HiveJob class EmailRecipientsSummary(HiveJob): def table(self): return 'emails_sent' def input_columns(self): return [ ('day', 'STRING'), ('weekday', 'INT'), ('sent', 'BIGINT') ] def output_columns(self): return [ ('year', 'INT'), ('weekday', 'INT'), ('sent', 'BIGINT') ] def query(self): return "SELECT YEAR(day), weekday, SUM(sent) FROM emails_sent GROUP BY YEAR(day), weekday;" if __name__ == "__main__": ``` ### Try it out Locally (must have a Hive server available): python email_recipients_summary.py -r local /path/to/your/local/file.csv EMR: python email_recipients_summary.py -r emr s3://path/to/your/S3/files/ *NOTE: for the EMR command, you will need to supply some basic configuration.* ### Serde Hive allows custom a serde to be used to define data formats in tables. Apiarist uses [csv-serde]() to handle the CSV format properly. ## Configuration There are a range of options for providing job-specific configuration. ### Command-line options Arguments can be passed to jobs on the command line, or programmatically with an array of options. Argument handling uses the [optparse]() module. Various options can be passed to control the running of the job. In particular the AWS/EMR options. - `-r` the run mode. Either `local` or `emr` (default is `local`) - `--conf-path` use a YAML configuration file. - `--output-dir` where the results of the job will go. - `--s3-scratch-uri` the bucket in which all the temporary files can go. - `--local-scratch-dir` this is where temporary file will be written. - `--s3-log-uri` write the logs to this location on S3. - `--ec2-instance-type` the base instance type. Default is `m3.xlarge` - `--ec2-master-instance-type` if you want the master type to be different. - `--num-ec2-instances` number of instances (including the master). Default is `2`. - `--ami-version` the ami version. Default is `latest`. - `--hive-version`. Default is `latest`. - `--s3-sync-wait-time` to configure how long to wait after uploading files to S3. - `--check-emr-status-every` configure the interval between each status check on a running job. - `--quiet` less logging - `--verbose` more logging ### Configuration file You can supply arguments to your job in a configuration file. It takes the same format as `mrjob` configuration. The name of the arguments is different, using underscores instead of hyphens and omitting leading hyphens. Config options are divided by the type of runner (local/emr) to allow provision of all options for a job in one file. Below is a sample config file: ```yaml runners: emr: aws_access_key_id: AABBCCDDEEFF11223344 aws_secret_access_key: AABBCCDDEEFF1122334AABBCCDDEEFF ec2_master_instance_type: c1.medium ec2_instance_type: m3.xlarge num_ec2_instances: 5 s3_scratch_uri: s3://myjobs/scratchspace/ hive_version: 0.11.3 local: local_scratch_dir: /home/apiarist/temp/ ``` Arguments supplied on command-line or in application code will override those supplied in the config file. ### Environment variables Some environment variables are used when the value is not provided in other configuration methods. `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for connecting to AWS. `S3_SCRATCH_URI` a S3 base location where all the temporary file for the job will be written. `APIARIST_TMP_DIR` where local files will be written during job runs. (This is overridden by the `--local-scratch-dir` option) `CSV_SERDE_JAR_S3` a permanent location of the serde jar. If this is not set, Apiarist will automatically upload a copy of the jar to an S3 location in the scratch space. ### Passing options to your jobs Jobs can be configured to accept arguments. To do this, add the following method to your job class to configutr the options: ```python def configure_options(self): super(EmailRecipientsSummary, self).configure_options() self.add_passthrough_option('--year', dest='year') ``` And then use the option by providing it in the command line arguments, like this: python email_recipients_summary.py -r local /path/to/your/local/file.csv --year 2014 Then incorporating it into your HiveQL query like this: ```python def query(self): q = "SELECT YEAR(day), weekday, SUM(sent) " q += "FROM emails_sent " q += "WHERE YEAR(day) = {0} ".format(self.options.year) q += "GROUP BY YEAR(day), weekday;" return q ``` ## License Apiarist source code is released under Apache 2 License. Check LICENSE file for more information. - Author: Max Sharples - License: Apache - Provides apiarist - Categories - Development Status :: 4 - Beta - Intended Audience :: Developers - License :: OSI Approved :: Apache Software License - Natural Language :: English - Operating System :: OS Independent - Programming Language :: Python - Programming Language :: Python :: 2.5 - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Topic :: System :: Distributed Computing - Package Index Owner: maxsharples - DOAP record: apiarist-0.1.0.xml
https://pypi.python.org/pypi/apiarist/0.1.0
CC-MAIN-2017-04
refinedweb
891
50.94
Password Fields in UI - lachlantula Hi, I've created a program using the UI editor with a single text field being used for logging in. First the password is entered, so I set the text field to be a password field in the editor. Is there any way I can switch back to a regular text field (ie. Typing without the dots) in code? Thanks! @lachlantula , sorry I don't have an exact answer for you. I doubt you can do what you want to do though exactly using the one field, but I am not really sure. But you could have 2 fields on the view. Hide and show them depending on if the user clicked a button on the view to show or hide text for example. Set secure to False import ui v = ui.View(frame=(0,0,400,100)) t = ui.TextField(frame=v.frame) #t.secure = True t.secure = False v.add_subview(t) v.present('sheet') @abcabc , nice you are right. The label in the Designer would be better if it was 'Secure' though. I did a dir print and looked at the help, I still missed it. My brain was looking for something else. There appears to be a problem here though when using the system font. It wants to default to times font, even you set it explicitly. Something like Menlo font behaves a lot better, or as expected. import ui _font = ('Menlo', 24) #_font = ('<System>', 24) def btn_action(sender): fld = sender.superview['pwd'] fld.secure = not fld.secure fld.font = _font if fld.secure: sender.title = 'Clear Text' else: sender.title = 'Protected' v = ui.View(frame=(0,0,400,100), bg_color ='white') t = ui.TextField(name = 'pwd', frame=(0,0,v.width, 48)) t.font= _font #t.secure = True t.secure = False v.add_subview(t) btn = ui.Button(frame = (10,0, 80, 32), title = 'Protected') btn.y = t.frame.max_y + 10 btn.width += btn.width * 1 btn.border_width = .5 btn.corner_radius = 3 btn.action = btn_action v.add_subview(btn) v.present('sheet') - lachlantula @Phuket2 that was my original idea, but I'd rather a solution that's a bit cleaner, just like what @abcabc suggested - which works perfectly! Interesting find though Phuket, I didn't run into that w/ Pythonista 3 and the UI builder. @lachlantula , no problems. I was just intrested in trying it after @abcabc pointed out the secure attr. I have see things go strange before when you play with the size of the system font. But initially I was using the default size. Oh, well
https://forum.omz-software.com/topic/3408/password-fields-in-ui/7
CC-MAIN-2019-35
refinedweb
426
79.56
Python is a high-level, object-oriented programming language that has recently been picked up by a lot of students as well as professionals due to its versatility, dynamic nature, robustness, and also because it is easy to learn. Not only this, it is now the second most loved and preferred language after JavaScript and can be used in almost all technical fields. Demand for Python developers is increasing and will keep increasing in the next few years. The following concepts are essential for any developer that wants to ride that wave in the future. 1. Understanding lists and dictionaries This is a common misconception for new developers. Let’s say you create a list ‘x’ and then, assign this list to a new variable: x = [9,8,7] y = x Now, try appending a new value in the y list and then print both lists: y.append(6) print(y) # Prints [9,8,7,6] print(x) # Prints [9,8,7,6] You must be wondering why does the new value has been appended to both lists! This happens because when assigning lists in Python, unless otherwise specified, the list is not copied. Instead, a new reference to this list is created i.e; y is just a reference to the list. This means that operations in both variables will be reflected in the same list. To make a copy of the list, you need to use the .copy() method: x = [9,8,7] y = x.copy() y.append(6) print(y) # Prints [9,8,7,6] print(x) # Prints [9,8,7] 2. Context managers Context Managers are a great tool in python that help in resource management. They allow you to allocate and release resources when you want to. Context managers make sure that all aspects of a resource are handled properly. The most used and recognized example of a context manager is with the statement. with is mostly used to open and close a file. file = open(‘data.txt’,’w’) try: file.write(“Follow Me”) except: file.close() With the help of the context manager, you can do the task of opening a file in write mode and also closing it if something goes wrong in just one line precisely. The main advantage of using with is that it makes sure that our file will be closed at the end. with open (‘data.txt’,’w’) as f: f.write(“Follow Me”) Notice that we never called the f.close() method. The context manager handled it automatically for us, and it would try to do is as well, even if an exception was raised. There are many use cases that context managers can be used (i.e. aiohttp.ClientSession ) and of course, you can create your own. 3. Generators Generators are a kind of function that returns an object that can be iterated over. It contains at least a yield statement. yield is a keyword in python that is used to return a value from a function without destroying its current state or reference to a local variable. A function with a yield keyword is called a generator. A generator generates an item only once when they ask for it. They are very memory efficient and take less space in the memory. Example (Fibonacci Series Using Generators) — def fib(limit): a,b = 0,1 while a < limit: yield a a, b = b, a + b for x in fib(10): print (x) The difference between yield and return is that return terminates the function but yield only pauses the execution of the function and returns the value against it each time. 4. Type hinting Type hinting enables you to write clean and self-explanatory code. The way you apply it is by “hinting” the type of a parameter and the return value of a function. For example, we want to validate the text input of a user is always an integer. To achieve that, we write a function that returns True or False based on our validations: def validate_func(input): *...* Now that you know what this function does, it is pretty easy to understand by looking at the definition. But, it would not be that easy if you were not given the description above. What is the type of input parameter? Where does it come from? Is it already an integer? What if it is not? Does the function return anything, or just raises an exception? These questions can be answered, by refactoring the code to this: def validate_func(input: str) -> bool: *...* Now, this function is easier to be interpreted, even by someone who reads this for the first time. 5. Logging Logging is a process of capturing the flow of code as it executes. Logging helps in debugging the code easily. It is usually done in files so that we can retrieve them later. In python, we have a library logging that helps us to write logs onto a file. There are five levels of logging: - Debug: Used for diagnosing the problem with detailed information. - Info: Confirmation of success. - Warning: when an unexpected situation occurs. - Error: Due to a more serious problem than a warning. - Critical: Critical error after which the program can’t run itself. I will be writing a dedicated article on “Logging In Python” soon. Subscribe to get an email for when I will publish it. Final Thoughts Well, here are the Top 5 Python Concepts That Will Advance Your Career. The points described above, are only some of the Python insights experienced developers are keeping in mind. I hope you find this article helpful and learned some new things. Share this awesome article with your Pythoneer Friends.😀 Till then see you in my next article…
https://plainenglish.io/blog/5-python-concepts-that-will-advance-your-career?utm_campaign=Last%20Week%20in%20Plain%20English&utm_medium=email&utm_source=Revue%20newsletter
CC-MAIN-2022-40
refinedweb
952
74.29
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project. Hi, A single-threaded process cannot cancel itself using pthread_cancel() as reported in the bz. This is because header.multiple_threads is not set till a thread is cloned and cancellation point entries are guarded by this. This is a little odd in terms of a request since this can be easily done for the main process using exit and atexit. However, there is nothing in the specification of pthread_cancel that explicitly disallows this, so for the sake of compliance, pthread_cancel should work for a single-threaded process too. Attached patch unconditionally enables multiple_threads in the caller of pthread_cancel. This should have an impact only for a single-threaded process since any threads in a multi-threaded process should have that flag enabled. This was suggested in the bz by Jakub Jelinek. I have also added a test case to verify that this is fixed. I have run this on x86_64 with the test case with and without the patch to make sure that this is fixed. I did not find any regressions introduced as a result of this patch. Regards, Siddhesh nptl/ChangeLog: 2012-05-09 Siddhesh Poyarekar <siddhesh@redhat.com> Jakub Jelinek <jakub@redhat.com> [BZ #13613] * nptl/pthread_cancel.c (pthread_cancel): Enable multiple_threads before marking the thread as cancelled. * nptl/tst-cancel-self.c: New test case. * nptl/Makefile (tests): Add tst-cancel-self. diff --git a/nptl/Makefile b/nptl/Makefile index 07a1022..eb1b6ca 100644 --- a/nptl/Makefile +++ b/nptl/Makefile @@ -236,6 +236,7 @@ tests = tst-typesizes \ tst-cancel11 tst-cancel12 tst-cancel13 tst-cancel14 tst-cancel15 \ tst-cancel16 tst-cancel17 tst-cancel18 tst-cancel19 tst-cancel20 \ tst-cancel21 tst-cancel22 tst-cancel23 tst-cancel24 tst-cancel25 \ + tst-cancel-self \ tst-cleanup0 tst-cleanup1 tst-cleanup2 tst-cleanup3 tst-cleanup4 \ tst-flock1 tst-flock2 \ tst-signal1 tst-signal2 tst-signal3 tst-signal4 tst-signal5 \ diff --git a/nptl/pthread_cancel.c b/nptl/pthread_cancel.c index 249aa11..1bfca63 100644 --- a/nptl/pthread_cancel.c +++ b/nptl/pthread_cancel.c @@ -95,6 +95,14 @@ pthread_cancel (th) break; } + + /* A single-threaded process should be able to kill itself, since there is + nothing in the POSIX specification that says that it cannot. So we set + multiple_threads to true so that cancellation points get executed. */ + THREAD_SETMEM (THREAD_SELF, header.multiple_threads, 1); +#ifndef TLS_MULTIPLE_THREADS_IN_TCB + __pthread_multiple_threads = *__libc_multiple_threads_ptr = 1; +#endif } /* Mark the thread as canceled. This has to be done atomically since other bits could be modified as well. */ diff --git a/nptl/tst-cancel-self.c b/nptl/tst-cancel-self.c new file mode 100644 index 0000000..2b6baf8 --- /dev/null +++ b/nptl/tst-cancel-self.c @@ -0,0 +1,54 @@ +/* Copyright (C) 2012 Free Software Foundation, Inc. + This file is part of the GNU C Library. + Contributed by Siddhesh Poyarekar <siddhesh@redhat.com>, 2012. + + <pthread.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <unistd.h> + +static void +cleanup (void *arg) +{ + printf ("Main thread got cancelled and is being cleaned up now\n"); + exit (0); +} + +static int +do_test (void) +{ + int ret = 0; + + pthread_cleanup_push (cleanup, NULL); + if ((ret = pthread_cancel (pthread_self ())) != 0) + { + printf ("cancel failed: %s\n", strerror (ret)); + exit (1); + } + + sleep (1); + + printf ("Could not cancel self.\n"); + pthread_cleanup_pop (0); + + return 1; +} + + +#define TEST_FUNCTION do_test () +#include "../test-skeleton.c"
http://sourceware.org/ml/libc-alpha/2012-05/msg00458.html
CC-MAIN-2013-20
refinedweb
551
60.61
I need help to read fastq file on my server virtual machine im using the following python code def readFastq(filename): sequences = [] qualities = [] with open(filename) as fh: while True: fh.readline() seq = fh.readline().rstrip() fh.readline() qual = fh.readline().rstrip() if len(seq) == 0: break sequences.append(seq) qualities.append(qual) return sequences, qualities seqs, quals = readFastq('sample-seqs.fastq') print (seqs) giving me the following syntax error: invalid syntax (pointing to the line before the last line) thank you If it's pointing to the line before the last line, is your input filename/path correct? You code runs for me, though the output only returns [+], [+], [+]- so some of your .readline()logic is off too.
https://www.biostars.org/p/273300/
CC-MAIN-2020-50
refinedweb
118
58.89
Define a branch class that branches so that one way variables are fixed while the other way cuts off that solution. More... #include <CbcBranchToFixLots.hpp> Define a branch class that branches so that one way variables are fixed while the other way cuts off that solution. a) On reduced cost b) When enough ==1 or <=1 rows have been satisfied (not fixed - satisfied) Definition at line 23 of file CbcBranchToFixLots FIXME: should use enum or equivalent to make these numbers clearer. Infeasibility for an integer variable - large is 0.5, but also can be infinity when known infeasible. Reimplemented from CbcBranchCut. Return true if object can take part in normal heuristics. Reimplemented from OsiObject. Definition at line 65 of file CbcBranchToFixLots.hpp. Creates a branching object. Reimplemented from CbcBranchCut. data Reduced cost tolerance i.e. dj has to be >= this before fixed Definition at line 79 of file CbcBranchToFixLots.hpp. We only need to make sure this fraction fixed. Definition at line 81 of file CbcBranchToFixLots.hpp. Never fix ones marked here. Definition at line 83 of file CbcBranchToFixLots.hpp. Matrix by row. Definition at line 85 of file CbcBranchToFixLots.hpp. Do if depth multiple of this. Definition at line 87 of file CbcBranchToFixLots.hpp. number of ==1 rows which need to be clean Definition at line 89 of file CbcBranchToFixLots.hpp. If true then always create branch. Definition at line 91 of file CbcBranchToFixLots.hpp.
https://www.coin-or.org/Doxygen/Cbc/classCbcBranchToFixLots.html
CC-MAIN-2018-05
refinedweb
236
70.39
Connecting to an SAP HANA Service database The SAP HANA Service is now available, and there have been some questions about how to connect to a HANA database in the cloud from a local SQL client. It’s a bit complicated because all connections to HANA Service must be encrypted. In this post I’ll go through some of the common scenarios. You don’t have to read it all: just find the bits you need. Note: if you are building an XSA or Cloud Foundry application that binds your application to a HANA service broker you can stop reading here. The service broker provides you with a logical connection and you don’t need to do anything else. This blog is intended only for standalone clients using SQL client/server connections. The sequence of scenarios is: - TCP/IP connections without certificate validation (for testing) - TCP/IP connections for production use - WebSocket connections, for use in organizations that block outgoing TCP/IP connections. Thanks go to Bjoern Brencher, Akshay Nayak, Tom Turchioe for contributing. For a more in-depth treatment of how to connect using the SAP Common Crypto library, I recommend Philip Mugglestone’s clear and detailed step-by-step video at the SAP HANA Academy YouTube channel. Let’s start with the identifiers you need. The SAP HANA Service Dashboard shows the basic information about ta HANA database. In particular, notice the endpoint and the ID. – The endpoint is the host and port you need for TCP/IP connections. It is usually of the form zeus.hana.prod….ondemand.com:port and here I’ll write it as zeus.hana…ondemand.com:port. – The ID of the database is a GUID, such as 45fx7b7a7-a2d9-ad49-84ab-89106146b944. In addition you need to know the user ID and password. I’ll indicate these by HANA_USER and hana_password. I’l walk through three cases, each of which have other cases included. To start with, I’ll look at TCP/IP connections that do not require specification of a local certificate (test connections and connections on Windows). Then I’ll show how you can use a WebSocket to connect. And then I’ll walk through how to manage certificates for properly secure connections. One word of warning: I’m no security expert and I sometimes get the names wrong for certificates, keys, and so on. This is a blog post to meet a short-term need, and not official documentation! In each case I’ll show an hdbsql command to start with, and then show a connection example from another interface to show how the keywords work. To use a WebSocket, you’ll need a version of the HANA Clients at least 2.3.106. It’s downloadable from and from SAP Software Downloads. First steps: TCP/IP connections without certificate validation Here is a connection from hdbsql using TCP/IP. It shows where the endpoint appears, and that you need to specify an encrypted connection. The example is split over several lines for readability. > hdbsql -n zeus.hana....ondemand.com:20058 \ -u HANA_USER \ -p hana_password \ -e \ -ssltrustcert The -n option specifies the host and port, -u and -p are for the user name and password, and the -e option specifies an encrypted connection. The -ssltrustcert option skips the validation of the host certificate and is not recommended for production use. Here we just use it for testing purposes, to verify that we have the other connection parameters specified properly. If you are using Windows, the HANA Client uses the Microsoft encryption library by default, and the Windows certificate store contains, by default, the certificate needed to verify the HANA Service key, so for Windows users you can use the following both for testing and in production: > hdbsql -n zeus.hana....ondemand.com:20058 \ -u HANA_USER \ -p hana_password \ -e For the programming interfaces, just use the keywords corresponding to each of the hdbsql options, which are all documented in the HANA Client Interface Programming Reference. Here is a node.js example: var hana = require('@sap/hana-client'); var conn = hana.createConnection(); var conn_parms_tcp_test = { serverNode : "zeus.hana....ondemand.com:port", encrypt : true, sslValidateCertificate: false, uid : "HANA_USER", pwd : "hana-password" }; conn.connect(conn_parms_tcp_test, function(err) { if (err) throw err; conn.exec("SELECT DATABASE_NAME FROM M_DATABASES", function(err, result) { if (err) throw err; console.log("Database name", result[0].DATABASE_NAME); conn.disconnect(); }) }); And here is a python example. Notice that the values for the **encrypt** and **sslValidateCertificate** keywords are strings, not boolean values. from hdbcli import dbapi conn = dbapi.connect( address="zeus.hana...dbaas.ondemand.com", port=port, user="HANA_USER", password="hana_password", encrypt='true', sslValidateCertificate='false' ) print("Connected") if conn.isconnected() else print("Not connected") conn.close() Again, if you are on Windows and the application has access to the certificate store, you can leave out the sslValidateCertificate keyword, and the client will look in the Microsoft certificate store to find the right certificate. Managing certificates SAP Cloud Platform uses Digicert certificates. If your certificate store does not have the correct certificate, you may need to download it and store it in a file. Here are two ways to get an appropriate certificate. Downloading a certificate from Digicert You can get Digicert certificates from The one to use is the DigiCert Global Root CA, Serial #: 08:3B:E0:56:90:42:46:B1:A1:75:6A:C9:59:91:C7:4A, Thumbprint: A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436. If you are using the OpenSSL encryption library, which is default on Linux and Mac OS, you need to convert this file to a “pem” format which the client can use. Copy the file DigiCertGlobalRootCA.crt to ~/.ssl and then run this command to generate a pem format file (split across multiple lines for readability): > openssl x509 -inform der -in DigiCertGlobalRootCA.crt \ -out DigiCertGlobalRootCA.pem Obtaining a certificate from the SAP Cloud Platform Cockpit Alternatively, you can get a certificate in the proper “PEM” format from the SAP Cloud Platform Cockpit. The CA Digicert certificate is stored in the Cloud Foundry service binding for usage in CF applications. You can see the certificate here: just select the text for the certificate itself (the whole string) and store it in a file: say ~/.ssl/DigiCertGlobalRootCA.pem. Newline characters can be left in or deleted; it doesn’t make any difference. Connecting, using a certificate to validate the server Here is an hdbsql connection string that uses this certificate: > hdbsql -n zeus.hana...ondemand.com:20058 \ -u HANA_USER -p hana_password \ -e -sslprovider openssl \ -ssltruststore ~/.ssl/DigiCertGlobalRootCA.pem And here are the keywords in a node.js connection method: var conn_parms_tcp = { serverNode : "zeus.hana...ondemand.com:<port>", encrypt : true, sslCryptoProvider : "openssl", sslTrustStore : "/path/to/home/.ssl/DigiCertGlobalRootCA.pem", uid : "HANA_USER", pwd : "hana_password" }; The connection parameters for other languages are similar. If you are running a node.js application on Cloud Foundry, you cannot access the file system directly and so you cannot specify a file in the the sslTrustStore keyword. Instead of the path given above, you can specify the certificate as a string, like this (the body of the certificate has been replaced by “…” here: you should include the entire certificate as a string). var conn_parms_tcp_string = { serverNode : "zeus.hana....ondemand.com:<port>", encrypt : true, sslCryptoProvider : "openssl", sslTrustStore : "-----BEGIN CERTIFICATE----- MIIDr...bd4= -----END CERTIFICATE-----", uid : "HANA_USER", pwd : "hana_password" }; WebSocket connections Some organizations block TCP/IP ports for outgoing connections. A solution in these cases is to use WebSocket connections, which run TCP/IP over an HTTP connection. To specify a WebSocket connection given the information on the HANA Service Dashboard you need to make the following changes to the TCP/IP examples above: - In the hostname or address parameter, replace *zeus* at the beginning of the hostname with *wsproxy*. - Replace the TCP/IP port with port 80. - Add the WebSocketURL connection parameter, providing the address /service/service-id, where service-id is the ID value shown in the HANA Service Dashboard. You may also need to provide parameters to specify a proxy server, as environments that limit outgoing ports usually require proxy server information. Here I’ll assume the proxy host is just “proxy” and that the port is the default 8080. Here is an hdbsql connection string using a WebSocket: > hdbsql -n wsproxy.hana...ondemand.com:80 \ -wsurl /service/<service-id> \ -u HANA_USER -p hana-password \ -e -sslprovider openssl \ -ssltruststore ~/.ssl/DigiCertGlobalRootCA.pem \ -proxyhost proxy -proxyport 8080 The proxyhost and port values can be left off if you are not behind a firewall. Also, if you are on Windows, the SSLTrustStore and SSLProvider may not be needed. As before, you can look up the connection parameters for your programming language in the HANA Client Interfaces Programming Reference. Here is a Python example (note the proxy_port is a string, not an integer): conn = dbapi.connect ( address="wsproxy.hana...ondemand.com", port=80, user="HANA_USER", password="hana-password", websocketurl='/service/<service-id>', encrypt='true', sslCryptoProvider='openssl', sslTrustStore='/path/to/home/.ssl/DigiCertGlobalRootCA.pem', proxy_host='proxy', proxy_port='8080' ) And here is a node.js example: var conn_parms_ws = { serverNode : "wsproxy.hana...ondemand.com:80", encrypt : true, proxy_host : "proxy", proxy_port : 8080, webSocketURL : "/service/<service-id>", sslCryptoProvider : "openssl", sslTrustStore : "/path/to/home/.ssl/DigiCertGlobalRootCA.pem", uid : "HANA_USER", pwd : "hana-password" }; For node.js applications in Cloud Foundry, which do not have access to the file system, you can supply the SSL Trust store as a string instead of a path. Here it is with the middle of the string cut out: var conn_parms_ws = { serverNode : "wsproxy.hana...ondemand.com:80", encrypt : true, proxy_host : "proxy", proxy_port : 8080, webSocketURL : "/service/<service-id>", sslCryptoProvider : "openssl", sslTrustStore : "-----BEGIN CERTIFICATE----- MIID...Ths3p= -----END CERTIFICATE-----", uid : "HANA_USER", pwd : "hana-password" }; This set of examples does not yet cover the use of SAP Common Crypto Library, but I hope it provides enough to get you through most cases. Tom, This is great information and will save folks a lot of time in taking advantage of HANA as a Service (HaaS or DBaaS). I've spent time using HaaS with ArcGIS and we know it is in their roadmap to support HaaS... HANA only when you need it and fast to get started...this will open up another avenue for folks looking to exploit HANA specific advantages with ArcGIS. Great blog ! Very well documented !! NOTE: a newer version(XS_PYTHON00_1-70003433.ZIP) has updated hdbcli libraries that provide for encrypted connections. If you are using the SAP provided python libraries(XS_PYTHON00_0-70003433.ZIP) found by searching the software center. The enclosed hdbcli-2.3.14 whl file doesn’t provide for encrypted connections(so connections to HaaS instances will fail). You can correct this by finding a more up to date hdbcli in the HANA Client libraries found here. Click “Maintenance Software Component” -> Pick “LINUX ON X86_64 64BIT” in the architecture pull-down and select the highest “Patch Level” SAR file. Note: Pick Linux on X86 even if you’re on a windows machine or mac. You’re looking for the library that will be packaged up in your application and sent to the CloudFoundry system for execution there, not locally. UnSAR and find the hdbcli-2.3.112 tar.gz file (as of this writing). Replace the 2.3.14 whl package file with the hdbcli-2.3.112 tar.gz file where you unzipped the XS_PYTHON…ZIP file. Your python’s code should now look like this. Hope this helps somebody out there. -Andrew This really helped Andrew. Thanks a lot!! Hi, I had a question. Is the IP address of a HaaS instance be mapped to a security group in the space? How is the HANA IP whitelisted from within the application container running in the space? Best regards, Prem Hi Prem, and sorry for being slow. Currently Security Groups don't affect HaaS instances. The only whitelisting I'm aware of is apparently a choice you can make as you set up a HANA Service instance. On the HANA Service dashboard you will see a whitelist of IP addresses. I believe that right now you need to open a ticket to get that changed. How to consume the HaaS in CAP via WebIDE, usually we specify the database_id as parameter under hdi_container resource, but this seems to not work. Can you please help? Sorry Pavan, I don't have an answer to that question. CAP runs in Cloud Foundry and uses its own connection methods rather than straight SQL connection parameters. Thanks for the prompt reply, Tom. I actually got to know how to consume - After creating HaaS container in cf-cockpit, we have to open the dashboard and here we can get the GUID of our DBaaS, now we need to add this as key value pair :- database_id: "<GUID>"
https://blogs.sap.com/2018/08/06/connecting-to-an-sap-hana-service-database/
CC-MAIN-2022-21
refinedweb
2,116
57.06
12422/what-is-approve-and-allowance-method contract Token { uint256 public totalSupply;); } approve(): This method is used by one user to give permission to use some of its tokens by another user. Suppose there are two users A and B. A has 100 tokens with it and wants to let B use 50 tokens from its balance(balance of A:100). So A uses the approve method to approve B to use 50 tokens from it. When A calls approve(address(B), 50) , B is allowed to use 50 tokens from 100 tokens of user A. allowance(): This method is used by an user to check how many tokens some other user has allowed it to use. B calls allowance(address(A), address(B)) to know how many tokens has A allowed B to use (from A’s tokens). In this case, the result will be 50. If() and require() have separate functions and ...READ MORE Some of the use-cases are: Healthcare Medical records are ...READ MORE We need different types of nodes to ...READ MORE You can find the details you want ...READ MORE Summary: Both should provide similar reliability of ...READ MORE This will solve your problem import org.apache.commons.codec.binary.Hex; Transaction txn ...READ MORE To read and add data you can ...READ MORE Yes, there will be limits.. But the ...READ MORE TransientMap: TransientMap contains data that might be used to ...READ MORE OR
https://www.edureka.co/community/12422/what-is-approve-and-allowance-method
CC-MAIN-2019-30
refinedweb
241
76.62
Amazon CloudFront can serve both compressed and uncompressed files from an origin server. CloudFront relies on the origin server either to compress the files or to have compressed and uncompressed versions of files available; CloudFront does not perform the compression on behalf of the origin server. With some qualifications, CloudFront can also serve compressed content from Amazon S3. For more information, see Choosing the File Types to Compress. Serving compressed content makes downloads faster because the files are smaller—in some cases, less than half the size of the original. Especially for JavaScript and CSS files, faster downloads translates into faster rendering of web pages for your users. In addition, because the cost of CloudFront data transfer is based on the total amount of data served, serving compressed files is less expensive than serving uncompressed files. CloudFront can only serve compressed data if the viewer (for example, a web browser or media player) requests compressed content by including Accept-Encoding: gzip in the request header. The content must be compressed using gzip; other compression algorithms are not supported. If the request header includes additional content encodings, for example, deflate or sdch, CloudFront removes them before forwarding the request to the origin server. If gzip is missing from the Accept-Encoding field, CloudFront serves only the uncompressed version of the file. For more information about the Accept-Encoding request-header field, see "Section 14.3 Accept Encoding" in Hypertext Transfer Protocol -- HTTP/1.1 at. For more information, see the following topics: Topics Here's how CloudFront commonly serves compressed content from a custom origin to a web application: You configure your web server to compress selected file types. For more information, see Choosing the File Types to Compress. You create a CloudFront distribution. You program your web application to access files using CloudFront URLs. A user accesses your application in a web browser. CloudFront directs web requests to the edge location that has the lowest latency for the user, which may or may not be the geographically closest edge location. At the edge location, CloudFront checks the cache for the object referenced in each request. If the browser included Accept-Encoding: gzip in the request header, CloudFront checks for a compressed version of the file. If not, CloudFront checks for an uncompressed version. If the file is in the cache, CloudFront returns the file to the web browser. If the file is not in the cache: CloudFront forwards the request to the origin server. If the request is for a type of file that you want to serve compressed (see Step 1), the web server compresses the file. The web server returns the file (compressed or uncompressed, as applicable) to CloudFront. CloudFront adds the file to the cache and serves the file to the user's browser. By default, IIS does not serve compressed content for requests that come through proxy servers such as CloudFront. If you're using IIS and if you configured IIS to compress content by using the httpCompression element, change the values of the noCompressionForHttp10 and noCompressionForProxies attributes to false. In addition, if you have compressed objects that are requested less frequently than every few seconds, you may have to change the values of frequentHitThreshold and frequentHitTimePeriod. For more information, refer to the IIS documentation on the Microsoft website. Some versions of NGINX require that you customize NGINX settings when you're using CloudFront to serve compressed files. In the documentation for your version of NGINX, see the documentation for the HttpGzipModule for more information about the following settings: gzip_http_version: CloudFront sends requests in HTTP 1.0 format. In some versions of NGINX, the default value for the gzip_http_version setting is 1.1. If your version of NGINX includes this setting, change the value to 1.0. gzip_proxied: When CloudFront forwards a request to the origin server, it includes a Via header. This causes NGINX to interpret the request as proxied and, by default, NGINX disables compression for proxied requests. If your version of NGINX includes the gzip_proxied setting, change the value to any. If you want to serve compressed files from Amazon S3: Create two versions of each file, one compressed and one uncompressed. To ensure that the compressed and uncompressed versions of a file don't overwrite one another in the CloudFront cache, give each file a unique name, for example, welcome.js and welcome.js.gz. Open the Amazon S3 console at. Upload both versions to Amazon S3. Add a Content-Encoding header field for each compressed file and set the field value to gzip. For an example of how to add a Content-Encoding header field using the AWS SDK for PHP, see Upload an Object Using the AWS SDK for PHP in the Amazon Simple Storage Service Developer Guide. Some third-party tools are also able to add this field. To add a Content-Encoding header field and set the field value using the Amazon S3 console, perform the following procedure: In the Amazon S3 console, in the Buckets pane, click the name of the bucket that contains the compressed files. At the top of the Objects and Folders pane, click Actions and, in the Actions list, click Properties. In the Properties pane, click the Metadata tab. In the Objects and Folders pane, click the name of a file for which you want to add a Content-Encoding header field. On the Metadata tab, click Add More Metadata. In the Key list, click Content-Encoding. In the Value field, enter gzip. Click Save. Repeat Step 4d through 4h for the remaining compressed files. When generating HTML that links to content in CloudFront (for example, using php, asp, or jsp), evaluate whether the request from the viewer includes Accept-Encoding: gzip in the request header. If so, rewrite the corresponding link to point to the compressed object name. Some types of files compress well, for example, HTML, CSS, and JavaScript files. Some types of files may compress a few percent, but not enough to justify the additional processor cycles required for your web server to compress the content, and some types of files even get larger when they're compressed. File types that generally don't compress well include graphic files that are already compressed (.jpg, .gif), video formats, and audio formats. We recommend that you test compression for the file types in your distribution to ensure that there is sufficient benefit to compression.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
CC-MAIN-2013-20
refinedweb
1,072
62.58
Introduction: Orange PI HowTo: Compile Sunxi Tool for Windows Under Windows PREQUISITES: You will need - A (desktop) computer running Windows. - An Internet connection. - An Orange PI board. The last is optional, but I am sure, that You already have it. Otherwise You won't read this instructable. When You buy the Orange PI single board computer it stays just a piece of dead metal until configured properly. And its main configuration file: "script.bin" is the first key to bring it alive. This file is located in the boot partition of Your bootable SD card. And luckily to us, in most of the Linux distributions from the Official site () this partition is FAT32 and can be easily seen by any Windows computer. It does really simplify things, since there is still no reliable way to write into the Linux ext2 partitions from under the Windows. Unlucky to us the script.bin configuration file has binary format completely unfriendly for human editing. One needs some kind of software tool in order to decrypt it and crypt back after the necessary modifications have been made. And such a toolset does exist. It is infamous SUNXI-TOOLS. The fly in the ointment is that it is intended to run under Linux and we either have to keep a dedicated Linux-machine for only to use the sunxi-tools, or to find a way how to compile them for windows. I could simply compile it and share the executable, but one never knows whether they would like to make a fresh release and You will need a new compilation ASAP. So I decided to make a guide how to compile the essential tool from the sources. Lets get started. Step 1: Download Sunxi-tools Get the latest (or necessary) version of the sunxi-tools sourcecode. Go to the URL: and choose to download as zip archive. Step 2: Unzip the Sourcecode Once the download has finished, unzip the sourcecode to the folder of Your choice. (further I will assume that this folder is c:\sunxitools\, so replace this path by the path of your own). Step 3: Download Code::blocks If You have an installed copy of some operational c++ compiler for windows. and if You know how to use it, You may directly proceed to step 3. Others should get a proper c++ compiler and a shell (IDE) to use it comfortably. The choice of mine is code::blocks for Windows along with preinstalled MinGW toolchain. You may get it from here: Download and install it. Step 4: Test Your IDE To test if things go ok, start codeblocks, click "create a new project", choose "console application", choose either c or c++, type the title of the checkout project, keep the defaults untouched in the next window and click "finish". Step 5: Complete Test Then click a green triangle on the top panel of the IDE or use Build->Run menu point. If things went right You should see a message from Your autogenerated "Hello world" application in the black "DOS" window. If not, it means that the IDE and the compiler aren't working properly and You will have to investigate how to set it right. Probably You will have to download another version of the programming tools or to check their permissions in Your firewall/antivirus software. Step 6: Create New Project Now You should have an operational C/C++ programmer's toolkit and the unpacked sunxi-tools sourcecodes in c:\sunxitools\ folder on Your computer. Its time to assemble a project. Create new project in Your IDE. Choose the plain C (not c++) project of the "console application" type. Make sure that You are creating project in the c:\sunxitools\ folder and not in some other place. (E.G. codeblocks tend to make a subfolder with the same name as the project has. So if You have named Your project, say "test", and try to place it in c:\sunxitools\, You may end up with the project gone to c:\sunxitools\test\ if You are not attentive enough.) Sunxi-tools contain several utilities, but for our purpose we will need only one: the so called "fexc" utility. Step 7: Add Files to Project Exactly "fexc" utility is responsible for conversion of the script.bin into text format and for the back conversion into binary. Its essential that the executable of this utility has the name "fexc.exe", so its good if You've named Your project as "fexc". However You can use any other name of the project, since You can always rename the executable after the compilation, or either You can choose "Project->Properties" from the top pulldown menu and in the appearing window click "Build targets" tab, and edit the "Output filename" field there to override the executable name. To Your autogenerated project You should add only five source files: - fexc.c - script.c - script_bin.c - script_fex.c - script_uboot.c and seven header files: - list.h (move it form c:\sunxitools\include\ folder to c:\sunxitools\ folder) - fexc.h - script.h - script_bin.h - script_fex.h - script_uboot.h - version.h Be sure to exclude the autogenerated main.c from the project, because fexc.c already has the "int main" function in it. (Remember that any program should have only one main function?). All the necessary source code files are already in the subfolder, where You have unpacked the sourcecodes to. The header files deserve a pair of words, where to get them. "list.h" - is usually in the "include" subfolder of the unpacked sourcecodes set. "version.h" - just create it Yourself. Put there a string like: #define VERSION "Win32" Then save and close the file. (You may decorate it with #define's and #ifdef's if You want.) If You now try to compile the project it will complain about lots of errors and one missing file. The errors are mostly due to a bit of excessive style freedom, the sunxi-tools programmers used to apply, and the missing file is the dependency not included into the pack of the source code. Lets deal with this step by step. Step 8: Have Gcc Follow the 1999 ISO C Language Standard In order for the compiler not complain the too free programming style set the "с99" standart of the compilation. In codeblocks go to the "Project -> Build Options" menu and in the "Compiler Settings -> Compiler Flags" check the "Have gcc follow the 1999 ISO C language standard" checkbox. Or You can just add "-std=c99" into Your compiler options string. Now if You try to compile the project those tons of errors should begone and You are one to one with the missing dependency. Step 9: Find the Missing Dependency The missing dependency is "mman.h" file - the header of some kind of linux memory manager. Windows C natively has no such file, but fortunately there is a windows port of it. Go to for windows. Download the snapshot of git repository. Step 10: Unpack the Mman Unpack the mman.c and mman.h files, place them into the c:\sunxitools\ folder. Step 11: And Add Them to the Project Step 12: Correct Path And in file "fex.c" raplece line: #include <sys/mman.h> to the #include "mman.h" At this step Your compiler should not complain anything and You will get the long wait fexc.exe as the output. Dont be happy too early. The utility is still not fully functional. You may ensure this by decrypting some valid script.bin file into the text form - script.fex file with consequent encrypting the script.fex file back into the script.bin. You may note that the size of the resulting script.bin differs slightly from the size of the original script.bin. And if You try to decrypt the resulting once again it will fail. Neither the Orange PI will work with this script.bin. To get the functional utility we have to discharge a code bomb, that someone has put into the sunxi-tools sourcecode. It will be our next step. Step 13: Exorcism In order to discharge the code bomb open the fexc.c code file and find there a text string of the next content: else if ((out = open(filename, O_WRONLY|O_CREAT|O_TRUNC, 0666)) < 0) { Just replace it with the next string: else if ((out = open(filename, O_WRONLY|O_CREAT|O_TRUNC|O_BINARY, 512))<0){ If not the evil digits "666" in the first string I'd think that the coder has just forgot to use the O_BINARY flag. But the Number of The Beast does clarify his intentions transparently. Go figure, how ingenious it is: due to the subtle difference in how the files are processed in Windows and Linux the bomb has no effect when the utility is compiled and used under the Linux. But it ruins everything when the utility is used under Windows. After the bomb has been disarmed, You can finally compile and safely use the fexc utility on Your Windows desktop computer. Step 14: NOTES 1) To use the fexc utility comfortably, You should get two batch files: bin2fex.bat - and - fex2bin.bat. You can get them from some faily fexc.exe build for Windows out there, or either You can type them Yourselves: - bin2fex.bat should contain "fexc -I bin -O fex script.bin script.fex" - fex2bin.bat should contain "fexc -O bin -I fex script.fex script.bin" 2) If it is difficult to find the mman manager for Windows one can avoid its usage at all. However it takes much more editing of the fexc.c file and requres at least some knowledge of c. For Your convinience I share the edited sourcecode of the fexc from the sunxi-tools v1.4 free from the dependency to mman.h along with codeblocks project file and with sample script.bin from some orange pi. You can download fexc_nomman.zip 3) It is possible that in consequent versions of sunxi-tools they will add some more dependencies. Feel free to find them over the internet and add them to Your compilation project. 5) Finally here is the precompiled version of fexc.exe for Win32: fexc_nomman.zip If You are lazy enough feel free to use ver. However beware that it won't be updatet if/when the newer versions of SunxiTools/Windows will be available. So its better to learn how to compile them than to depend on some fixed binary build, I presume. 4) The "Orange PI", "Code::Blocks", "Windows", "Linux", "Sunxi-Tools", "Allwinner", etc... are the correspondent trademarks of their respective owners. 5) If You compiler complains about not finding mman functions, like: undefined reference to '_imp__mmap' be aware that define lovers of the mman development community have forgotten that the code can be compiled not only as dll library. It can also be a static library or a standalone code like we have here. To fix the problem edit "mman.h" file as follows: a) find the strings: #if defined(MMAN_LIBRARY) #define MMANSHARED_EXPORT __declspec(dllexport) #else #define MMANSHARED_EXPORT __declspec(dllimport) #endif b) add the string #define MMANSHARED_EXPORT just below the strings found at the previous step Recommendations We have a be nice policy. Please be positive and constructive. 2 Comments Great first instructable, thanks for sharing! Thanks, Glad You like it. I gonna make a series of them from the very beginning (this one) to 3D programming for OPI (will be the last one).
http://www.instructables.com/id/Orange-PI-HowTo-Compile-Sunxi-Tool-for-Windows-Und/
CC-MAIN-2018-09
refinedweb
1,907
74.19
CU is STM32F767ZI, The major architecture is shown in the following diagram: It is based on the high-performance Arm® Cortex®-M7 32-bit RISC core operating at up to 216 MHz frequency - Datasheet - The Cortex®-M7 core features a floating point unit (FPU) which supports Arm® double-precision - All the devices offer three 12-bit ADCs (3×12-bit, 2.4 MSPS ADC: up to 24 channels ), two DACs (2×12-bit D/A converters), a low-power RTC, twelve general-purpose 16-bit timers including two PWM timers for motor control, two general-purpose 32-bit timers, a true random number generator (RNG). - Chrom-ART Accelerator™ (DMA2D), graphical hardware accelerator enabling enhanced graphical user interface - Hardware JPEG codec - LCD-TFT controller supporting up to XGA resolution - MIPI® DSI host controller supporting up to 720p 30 Hz resolution ( 8- to 14-bit camera interface up to 54 Mbyte/s) - USB 2.0 high-speed/full-speed device/host/OTG controller with dedicated DMA, on-chip full-speed PHY and ULPI - 10/100 Ethernet MAC with dedicated DMA: supports IEEE 1588v2 hardware, MII/RMII Task 1.1 ARM MBED OS Quick Start One of the features of theSTM NUCLEO-F767ZI is the ARM MBED OS support. -. - A broad range of connectivity options are available in Mbed OS, supported with software libraries, development hardware, tutorials and examples. -. - Mbed supports key MCU families including STM32, Kinetis, LPC, PSoC and nRF52: device support link Please go to the ARM MBED device link for the STMF767: [link]. This page shows all the features of the NUCLEO-F767ZI and the pinout definitions. - In the right side of the page, click “add to Mbed Compiler” - You are required to create an ARM Mbed account or login to the existing account. After successful login, you will see ” NUCLEO-F767ZI has been added to your account”. Click the “Open Mbed Compiler” button as shown below - You will see the main page of the ARM Mbed online compiler, as shown below - You can create a new program based on the NUCLEO-F767ZI platform and select one of the Templates (shown below). - Select the first first blinky example, you will see your created project. The main.cpp file looks like this. The ARM Mbed code is very simple and highly abstracted. The following code defines the serial port pc, then simply using pc.printf to output message to the terminal. Define the led1 is also very simple (based on DigitalOut), and led1=!=led1 simply toggles the LED. - Before you compile the project, make sure you selected the right platform in the right side of the toolbar. If your platform is not NUCLEO-F767ZI, you can click the device manager and select NUCLEO-F767ZI as shown below (you also can switch to different platforms). - To compile the code, simply click the “Compile” button. - After compiling done, you will be asked to save one BIN file. In ARM Mbed, downloading the code to the board is very simple. When you connect the device to your PC, it will shown like a flash drive. You just need copy the BIN file to the flash drive, it will automatically download the executable code (BIN) to the board. Here, we simply save the generated BIN file to the NODE_F767ZI flash drive as shown below. - After the code has been downloaded, you will see the LD1 blink. You can further connect the board via the terminal. You can open Tera Term or any other terminal software, and select the right COM port and buadrate “9600” to see the message output. Task 1.2 ARM MBED Examples You can go to this page to check the ADC/DAC example of the ARM Mbed: [link]. Click “Import into Compiler”. You will see the main.cpp file like this #include "mbed.h" AnalogIn in(A0); #if !DEVICE_ANALOGOUT #error You cannot use this example as the AnalogOut is not supported on this device. #else AnalogOut out(PA_4); #endif DigitalOut led(LED1); //------------------------------------ // Hyperterminal configuration // 9600 bauds, 8-bit data, no parity //------------------------------------ int main() { printf("\nAnalog loop example\n"); printf("*** Connect A0 and PA_4 pins together ***\n"); while(1) { for (float out_value = 0.0f; out_value < 1.1f; out_value += 0.1f) { // Output value using DAC out.write(out_value); wait(0.1); // Read ADC input float in_value = in.read(); // Display difference between two values float diff = fabs(out_value - in_value); printf("(out:%.4f) - (in:%.4f) = (%.4f) ", out_value, in_value, diff); if (diff > 0.05f) { printf("FAIL\n"); } else { printf("OK\n"); printf("\033[1A"); // Moves cursor up of 1 line } led = !led; } } } The ADC input pin is “A0”, which is PA_3 pin in CN9 connector; the DAC output pin is PA_4 in CN7. The pin definition is in the this page. Using one jumper cable to connect PA_3 pin and PA_4 together, i.e., DAC output to the ADC. After you connect these two pins, you will see the following terminal output You can go to this link to see the most simple ARM Mbed RTOS example. The main.cpp file looks like this #include "mbed.h" void print_char(char c = '*') { printf("%c", c); fflush(stdout); } Thread thread; DigitalOut led1(LED1); void print_thread() { while (true) { wait(1); print_char(); } } int main() { printf("\n\n*** RTOS basic example ***\n"); thread.start(print_thread); while (true) { led1 = !led1; wait(0.5); } } You can simply use “thread.start(print_thread)” to create a new RTOS thread. You can check other sample codes in the main page of the board: [link] Task 1.3 ARM MbedStudio (optional) ARM also developed one local version of the IDE: ARM MbedStudio. The Mbed Studio supports debug features. However, it is still in beta version. The supported board is very limited. After you install the ARM MbedStudio (Windows, Linux, Mac version), the main page of the ARM MbedStudio is like this. You can click File->New Program, select one of the template (e.g., mbed-os-example-blinky) The main code looks like this In order to make the ARM MbedStudio to recognize our NUCLEO-F767ZI board, we need to upgrade the ST-LINK firmware. You can download the fireware upgrade software from here: [link]. After you installed the software, you will see this page You can click “Device Connect” to identify the device, and click “Yes” to upgrade the firmware. After you upgraded the firmware, the ARM MbedStudio will recognize the NUCLEO-F767ZI board in the Target part automatically. After the board has been recognized, you will have a new “Run” button after the Build button (as shown below). You can click the “Run” to download the code. The ARM MbedStudio provides the Debug features, but it requires the pyOCD support [link]. Currently, the debug feature only support the following boards [link]. Our NUCLEO board is not in the list yet. Task 2 STM32Cube STM32CubeM7 [link] also includes STM32CubeMX, a graphical software configuration tool that allows the generation of C initialization code using graphical wizards. Task 2.1 STM32CubeF7 Examples The STM32CubeM7 components are shown in the following figure. STM32CubeF7 gathers, in a single package, all the generic embedded software components required to develop an application on STM32F7 microcontrollers. In line with the STMCube™ initiative, this set of components is highly portable, not only within the STM32F7 Series but also to other STM32 series.STM32CubeF7GettingStarted The package structure of the STM32CubeF7 is shown in After you downloaded the STM32CubeF7 and extract the package, the projects related to F767 is shown under Projects->STM32F767ZI-Nucleo folder. The firmware architecture (divided into three levels) is shown in the following figure. - The folders inside the STM32CubeF7 called Examples, Examples_LL, and Examples_MIX are in level 0. These examples use respectively HAL drivers, LL drivers and a mix of HAL and LL drivers without any middleware component. - The Applications folder is in level 1, they provide typical use cases of each middleware component. - The Demonstration folder is in level 2, they implement all the HAL, BSP and middleware components. Open the IDE for STM32, for example, System Workbench for STM32 (). It has Mac, Linux and Windows version. The download link of the Windows version is [here]. If you do not have Java installed, you should download Java from [here]. The Oracle Java 11/12 installers do not register Java as the default JRE on the system path, you should setup the Java path in Windows environment. We can import any example projects inside the STM32CubeF7 folder, for example, the GPIO example The main code looks like the following figure. The drivers are defined as the HAL lay. All APIs start with HAL_, for example, HAL_GPIO_TogglePin. You can import other Examples in the STM32CubeF7 to check the sample code of different peripherals. Task 2. - STM32CubeMX main link. Download the STM32CubeMX software - STM32CubeMX is available as standalone software running on Windows®, Linux® and macOS® (macOS® is a trademark of Apple Inc. registered in the U.S. and other countries.) operating systems, or through Eclipse plug-in. Inside the download folder, you will see the Mac, Linux and Windows version - Install the STM32CubeMX software - After the STM32CubeMX has been installed, you can create a new project by either access to MCU selector or Board selector. - Click “Access to Board Selector”, and select the NUCLEO-F767ZI board from the list - Click start project, click YES for the popup window (initialize in default mode) - After you opened the project, you can configure the pinout, clock, and manage the project in the following GUI - You can click any pins in the chip diagram and select the function of each pin - In addition to select different pin functions and modules, you also can select middleware componets in the left side. For example, you can select FREERTOS in the middleware part - We can also configure the timebase source for the FreeRTOS. We can select SYS in System Core, and configure the Timebase Source as TIM2. - After the pin and module configuration finished, we can configure the Project Manager. Name the project and location, select the Toolchain as “SW4STM32”. Click save the project, it will ask to download the required fireware (1.24GB). - In the Code Generator page (left side bar), we can select “Generate peripheral initialization as a pair of .c/.h files” - After all the configuration is done, we can click “Generate the code” button to generate the System Workbench project. Open System Workbench for STM32, we can import our generated project. The source file architecture as shown below - Open the freertos.c, add the following code (create a ToggleLedThread) after the default thread definition /* USER CODE BEGIN RTOS_THREADS */ /* add threads, ... */ osThreadDef(Thread, ToggleLedThread, osPriorityBelowNormal, 0, configMINIMAL_STACK_SIZE); osThreadCreate(osThread(Thread), NULL); /* USER CODE END RTOS_THREADS */ Question: please add the missing ToggleLedThread function, and toggle the LED2 or LED3 in every 1 or 2 second. Task 2.3 STMCubeIDE (optional) STMicroelectronics’ STM32CubeIDE is a free, all-in-one STM32 development tool offered as part of the STM32Cube software ecosystem. - Latest version: 1.0 - STM32CubeMX tool for configuring the microcontroller and managing the project build. - Based on ECLIPSE™/CDT, with support of ECLIPSE™ add-ons, GNU C/C++ for Arm® toolchain and GDB debugger. - Support of ST-LINK (STMicroelectronics) and J-Link (SEGGER) debug probes - Import project from Atollic® TrueSTUDIO® and AC6 System Workbench for STM32 - Multi-OS support: Windows®, Linux®, and macOS® - Additional advanced debug features including: CPU core, IP register, and memory views; Live variable watch view; System analysis and real-time tracing (SWV); CPU fault analysis tool Task 3 STM SensorTile The STEVAL-STLKT01V1 (SensorTile development kit) is a comprehensive development kit designed to support and expand the capabilities of the SensorTile and comes with a set of cradle boards enabling hardware scalability. - STLKT01V1: [Link] - The SensorTile is a tiny, square-shaped IoT module that packs powerful processing capabilities leveraging an 80 MHz STM32L476JGY microcontroller and Bluetooth low energy connectivity based on BlueNRG-MS network processor as well as a wide spectrum of motion and environmental MEMS sensors, including a digital microphone. - To upload new firmware onto the SensorTile, an external SWD debugger (not included in the kit) is needed. It is recommended to use ST-LINK/V2-1 found on any STM32 Nucleo-64 development board. - In this lab, we will use our Nucleo-F767ZI board as the external SWD debugger for the SensorTile. There are three PCB boards inside the development kit - SensorTile module (STEVAL-STLCS01V1) with STM32L476JG MCU and other sensors - LSM6DSM: The LSM6DSM is a system-in-package featuring a 3D digital accelerometer and a 3D digital gyroscope; SPI & I2C serial interface with main processor data synchronization - LSM303AGR: The LSM303AGR is an ultra-low-power high-performance system-in-package featuring a 3D digital linear acceleration sensor and a 3D digital magnetic sensor; SPI / I2C serial interfaces - LPS22HB: The. - MP34DT05-A: The MP34DT05-A is an ultra-compact, low-power, omnidirectional, digital MEMS microphone built with a capacitive sensing element and an IC interface; PDM output - BlueNRG-MS: The BlueNRG-MS is a very low power Bluetooth low energy (BLE) single-mode network processor, compliant with Bluetooth specification v4.2; The Bluetooth Low Energy stack runs on the embedded ARM Cortex-M0 core. The stack is stored on the on-chip non-volatile Flash memory and can be easily upgraded via SPI. - BALF-NRG-02D3: This device is an ultra-miniature balun which integrates matching network and harmonics filter - LD39115J18R: 150 mA low quiescent current low noise voltage regulator; Input voltage from 1.5 to 5.5 V - The functional block diagram is shown below The hardware core system is shown below - SensorTile expansion Cradle board (we will use this one in this lab) as shown below - Equipped with 16bit stereo audio DAC (TI PCM1774) - USB port, STM32 Nucleo, Arduino UNO R3 and SWD connector - with SensorTile plug connector - ST2378ETTR – 8-bit dual supply 1.71 to 5.5 V level translator - Sensortile Cradle board with SensorTile footprint (solderable) (not use it in this lab) as shown below There are four major software libraries and tools for the STM SensorTile - STSW-STLKT01: SensorTile firmware package that supports sensors raw data streaming via USB, data logging on SDCard, audio acquisition and audio streaming - FP-SNS-ALLMEMS1 - STBLESensor: iOS and Android demo Apps - BlueST-SDK: BlueST-SDK is a multi-platform library (Android/iOS/Python) that enables easy access to the data exported by a Bluetooth Low Energy (BLE) device implementing the BlueST protocol Task 3.1 STSW-STLKT01 DataLog The STSW-STLKT01 firmware package for SensorTile provides sample projects for the development of custom applications [link] - Built on STM32Cube software technology, it includes all the low level drivers to manage the on-board devices and system-level interfaces. - The package comes with the DataLog_Audio, DataLog, AudioLoop and BLE_SampleApp applications. - The DataLog_Audio application allows the user to save the audio captured by the on-board microphone on SD card as a common .wav file. - The DataLog application features raw sensor data streaming via USB (Virtual COM Port class) and sensor data storage on an SD card exploiting RTOS features - The AudioLoop application sends audio signals acquired by the microphone via I²S and USB interfaces, allowing the user to play the sound on loudspeakers/headphones or record it on an host PC - The BLE_SampleApp provides an example of Bluetooth Low Energy configuration that enables SensorTile to stream environmental sensor data; it is compatible with the STBLESensor app available for Android and iOS Let’s download the software from this [link], you will get the following package. The organization is similar to STM32Cube. To program the SensorTile board, we first plug the SensorTile module to the SensorTile expansion Cradle board as shown below To enable the SWD debug feature, we need to use one external ST-LINK debugger (here we just use our NUCLEO-F767ZI board ) - Remove the ST-LINK jumpers (two jumpers) in the NUCLEO-F767ZI board. This step will disconnect the ST-LINK part to the STM32F767 target MCU. We will use the ST-LINK part to connect the external MCU, i.e., SensorTile. Do not lost the two jumpers. - Connect the ST-LINK port in the NUCLEO-F767ZI board to the SWD connector on the SensorTile cradle extension board. A 5-pin flat cable is provided in the SensorTile Kit package. The pin1 should be aligned together. - The following figure shows the connection result - Plug the two USB ports to your computer (one for ST-Link in NUCLEO-F767ZI, another is for the SensorTile) - Using ST Link utility software (en.stsw-link004) to verify the connected target MCU is the SensorTile (STM32L4) not the STM32F7. (If you have any connection errors, you can lower the SWD frequency from 4MHz to other frequencies). After the hardware setup is ready, we can open System Workbench to import the sample code in STSW-STLKT01. We first import the DataLog sample code first. You can build and run the code. After you run the code, you will see some popup windows to show a new USB device. This is because the Datalog code setup the USB device and transfer the sensordata through the USB port. In Device Manager, you will see two COM ports: 1) COM9 is the ST link port (the USB port that connected to the NUCLEO-F767ZI board); 2) COM19 is the newly created USB serial device in the Datalog code for the SensorTile. To get the sensor data to the computer, we need to use terminal software (e.g., Tera Term) to connect to the COM10 If your code is running, you will see the sensor data is shown in the terminal. (If your terminal connection is stuck or nothing shows, you can plugin the sensortile USB port again) You can check the project code of the Datalog example In side the main.c, you will configure the USB device after the HAL_Init(); Then, two threads are created: GetData_Thread and WriteData_Thread. osKernelStart() starts the FreeRTOS kernel. Please read the code and answer the following questions: - The GetData_Thread created one semaphore in the following code. Which function will release the semaphore and let the GetData_Thread continue? readDataSem_id = osSemaphoreCreate(osSemaphore(readDataSem), 1); osSemaphoreWait(readDataSem_id, osWaitForever); - The GetData_Thread created one pool and one message in the following code. Which code is used to put the sensor data into the Pool? What’s the usage of the Message? How (which code) can the WriteData_Thread get the sensor data? sensorPool_id = osPoolCreate(osPool(sensorPool)); dataQueue_id = osMessageCreate(osMessageQ(dataqueue), NULL); - Which code is used to send the sensor data to the USB interface? - Why the humidity value is always “0”? - MX_X_CUBE_MEMS1_Init function and getSensorsData function in datalog_application.c are used to initialize the sensors and get the sensor value. Task 3.2 STSW-STLKT01 BLE Sample App Import the BLE Sample App from the STSW-STLKT01 In side the main.c, we perform the following code to initialize the BLE stack after the HAL_Init() and SystemClock_Config(). /* Initialize the BlueNRG */ Init_BlueNRG_Stack(); /* Initialize the BlueNRG Custom services */ Init_BlueNRG_Custom_Services(); /* initialize timers */ InitTimers(); StartTime = HAL_GetTick(); The BLE protocol stack is shown in the follow figure [link] The host controller interface (HCI) layer provides a standardized interface to enable communication between the host and controller. - In BlueNRG, this layer is implemented through the SPI hardware interface. - The host can send HCI commands to control the LE controller. - The HCI interface and the HCI commands are standardized by the Bluetooth core specification At the highest level of the core BLE stack, the GAP specifies device roles, modes and procedures for the discovery of devices and services, the management of connection establishment and security. - GAP handles the initiation of security features. - The BLE GAP defines four roles with specific requirements on the underlying controller: Broadcaster, Observer, Peripheral and Central. The GATT defines a framework that uses the ATT for the discovery of services, and the exchange of characteristics from one device to another. GATT specifies the structure of profiles. In BLE, all pieces of data that are being used by a profile or service are called “characteristics”. A characteristic is a set of data which includes a value and properties. - The ATT protocol allows a device to expose certain pieces of data, known as “attributes”, to another device. BLE protocol stack is used by the applications through its GAP and GATT profiles. The GAP profile is used to initialize the stack and setup the connection with other devices. The GATT profile is a way of specifying the transmission – sending and receiving – of short pieces of data known as ‘attributes’ over a Bluetooth smart link. All current Low Energy application profiles are based on GATT. The GATT profile allows the creation of profiles and services within these application profiles Here is a depiction of how the data services are setup in a typical GATT server. Inside the Init_BlueNRG_Stack() - function hci_init(HCI_Event_CB, NULL); is used to initialize the HCI. - ret = aci_gatt_init(); is used to initialize the GATT server on the slave device. Initialize all the pools and active nodes. Until this command is issued the GATT channel will not process any commands even if the connection is opened. This command has to be given before using any of the GAP features. [link] - aci_gap_init_ function is used to register the GAP service with the GATT. The device name characteristic and appearance characteristic are added by default and the handles of these characteristics are returned in the event data. The role parameter can be a bitwise OR of any of the values mentioned below. This API initializes BLE device for a particular role (peripheral, broadcaster, central device etc.). The role is passed as first parameter to this API. Two services are added inside the Init_BlueNRG_Custom_Services() - Add_HWServW2ST_Service - Use aci_gatt_add_serv to add a service on the GATT server device. Here service_uuid is the 128-bit private service UUID allocated for the service (primary service). This API returns the service handle in servHandle. - aci_gatt_add_char()is used to add the characteristics - Add Add_ConfigW2ST_Service - Use aci_gatt_add_serv to add a service on the GATT server device. - aci_gatt_add_char is used to add the characteristics “ConfigCharHandle” After initialization, the main loop of the application will blink Led when there is not a client connected. It will then handle BLE event (hci_user_evt_proc) and update the BLE advertise data and make the board connectable (setConnectable). When SendEnv=1 (periodically setup by the TIM1_CH1 timer), it will call SendEnvironmentalData() function to send environmental data. - By checking TargetBoardFeatures, then utilize BSP_ENV_SENSOR_GetValue function to read sensor value. - If the BLE connection is there, it will use Term_Update function to send data - It will call aci_gatt_update_char_value to update the BLE characteristic value. - If the BLE connection is not there, it will use STLBLE_PRINTF to print the data to the USB terminal Task 3.3 SensorTile FP-SNS-ALLMEMS1 (optional) FP. - The FP-SNS-ALLMEMS1 firmware provides a complete framework to build wearable applications. The STBLESensor application based on the BlueST-SDK protocol allows data streaming and a serial console over BLE controls the configuration parameters for the connected boards. - FP-SNS-ALLMEMS1 is the default firmware installed in the SensorTile for out of box experience. All STEVAL-STLKT01V1 is already programmed with FP-SNS-ALLMEMS1 firmware. - software creates the following first Bluetooth services: - HW characteristics related to MEMS sensor devices - SW characteristics: - quaternions generated by the MotionFX library in short precision - magnetic North direction (e-Compass) - recognized activity using the MotionAR algorithm - recognized carry position using the MotionCP algorithm - recognized gesture using the MotionGR algorithm - audio source localization using the AcousticSL algorithm - audio beam forming using the AcousticBF algorithm - voice over Bluetooth low energy using the BlueVoiceADPCM algorithm - SD data logging (audio and MEMS data) using Generic FAT File System middleware The second service exposes the Console service with: - stdin/stdout for bi-directional communication between client and server - stderr for a mono-directional channel from the STM32 Nucleo board to an Android/iOS device The full software architecture is shown in After you download the software, it contains the following folders (similar to STM32Cube) Open System workbench for STM32, import the ALLMEMS1 sensortile project The full code looks like this You can build and download the code the SensorTile. The SensorTile will be function as the same to the out of box experience. Task 3.4 SensorTile Bootloader (optional) We can use the ST-Link Utility to flash the code to the Sensortile board and make Sensortile board run the program every time power is supplied to the board. Open the ST-Link Utility software, click File->Open Files, navigate to the folder where you have the new bin file compiled from the IDE (System workbench) After you opened the bin file, you will see the following window. Change the value of the Address field to be 0x08004000 and change the value of the Size to be 0x1000. Click Target-Connect Click Target->Program to start the download window as shown below. Modify the Start address to be 0x08004000. Click Start, then close the bin file. Navigate to the Utilities folder->BootLoader->STM32L476RG, open the BootLoaderL4.bin file. Change the address field to be 0x08000000, click Target->Program. Change the start address to be 0x08000000, click start. Then, you can remove the SWD cable and the SensorTile device will start the code automatically after power on. Apart from storing code, FP-SNS-ALLMEMS1 uses the FLASH memory for Firmware-Over-The-Air updates. It is divided into the following regions (see figure below): - 1. the first region contains a custom boot loader - 2. the second region contains the FP-SNS-ALLMEMS1 firmware - 3. The third region is used for storing the FOTA before the update The FP-SNS-ALLMEMS1 cannot not be flashed at the beginning of the flash (address 0x08000000), and is therefore compiled to run from the beginning of the second flash region, at 0x08004000 - The FP-SNS-ALLMEMS1 cannot not be flashed at the beginning of the flash (address 0x08000000), and is therefore compiled to run from the beginning of the second flash region, at 0x08004000 On any board reset: - If there is a FOTA in the third Flash region, the boot loader overwrites the second Flash region (with FPSNS-ALLMEMS1 firmware) and replaces its content with the FOTA and restarts the board. - If there is no FOTA, the boot loader jumps to the FP-SNS-ALLMEMS1 firmware - To flash modified ALLMEMS1 firmware, simply flash the compiled FP-SNS-ALLMEMS1 firmware to the correct address (0x08004000).
https://kaikailiu.cmpe.sjsu.edu/embedded-system/stm-lab-1-stm32f7-and-sensortile/
CC-MAIN-2022-05
refinedweb
4,384
52.49
If you're using PokemonGo-Bot to level multiple accounts, you might have noticed that the dev branch changes quite often the structure of the configuration files. Now I've made this to update my files easily when the structure changed.. You need to have Google Dart installed. pub global activate pogogen PokemonGo-Botdirectory, you'll need to type: pogogen pogogen --help. configs/config.json.pokemon.exampleas its template. You can install the package from the command line: $ pub global activate pogogen The package has the following executables: $ pogogen Add this to your package's pubspec.yaml file: dependencies: pogogen: ^0.4.2 You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:pogogen/pogogen.
https://pub.dev/packages/pogogen
CC-MAIN-2019-26
refinedweb
143
58.48
"ASSERTION: Going to destroy a frame we didn't remove . Prepare to crash" with XUL listbox RESOLVED FIXED Status () -- critical People (Reporter: jruderman, Assigned: tnikkel) Tracking (Blocks 1 bug, 6 keywords) Points: --- Firefox Tracking Flags (blocking1.9.1 .2+, status1.9.1 .2-fixed) Details Attachments (2 attachments, 3 obsolete attachments) ###!!! ASSERTION: Going to destroy a frame we didn't remove. Prepare to crash: 'removed', file /Users/jruderman/central/layout/xul/base/src/nsListBoxBodyFrame.cpp, line 1497 Null-deref crash [@ nsStackLayout::Layout] I have another testcase that triggers this assertion followed by a call to 0xdddddddd... Flags: blocking1.9.2? Latest nightly still crashes. doesn't crash. crashes. Most likely caused by bug 432068. If the next content isn't a list item but has a frame then we end up returning something other than a list item. I haven't looked into this too deeply, but this seemed like the logical thing to fix it. This fixes the crash for me, but I still get WARNING: ENSURE_TRUE(listbox) failed: file /home/tim/ffapply/src/layout/xul/base/src/nsListBoxBodyFrame.cpp, line 779 WARNING: ENSURE_TRUE(listboxContent) failed: file /home/tim/ffapply/src/layout/xul/base/src/nsListBoxBodyFrame.cpp, line 1447 because of the appended listboxbody with no corresponding listbox, I would assume. Got a better idea of what is going on here now. Before bug 432068 landed we would create two frames for any non-listitem content inside a listbox. One would be in the nsListBoxBodyFrame's mFrames nsFrameList because it was created by nsListBoxyBodyFrame. The other was created in the normal fashion and would not be in mFrames. After bug 432068 landed we would detect if a non-listitem inside a listbox had a frame and reuse that one without it being in mFrames and this caused problems. So if the content already has a frame, and the content is not a listitem just skip over it. Does the recursion here have the potential to overflow the stack? Attachment #384201 - Attachment is obsolete: true Attachment #384277 - Flags: superreview?(bzbarsky) Attachment #384277 - Flags: review?(bzbarsky) > The other was created in the normal fashion Where, exactly? This part confuses me.... Is the check on content tag really the right one? What's the parent frame of that frame we get back from GetPrimaryFrameFor? ccing some folks who might know something about this code. I really wish we could just rip it all out already. Flags: wanted1.9.1.x? Flags: blocking1.9.0.13? (In reply to comment #5) > > The other was created in the normal fashion > > Where, exactly? This part confuses me.... nsListBoxBodyFrame does lazy construction of the frames for its listitems. In nsCSSFrameConstructor::ContentInserted/Appended/Removed we have special checks (NotifyListBoxBody in ContentInserted/Removed and MaybeGetListBoxBodyFrame in ContentAppended) for child content with tag listitem and parent content with tag listbox. If we get that combination we short circuit the usual frame construction work and just call nsListBoxBodyFrame::OnContentInserted/Removed. If we have a parent with tag listbox but the child is not of tag listitem we don't follow this path and create the frame as normal. > Is the check on content tag really the right one? GetListItemContentAt, GetListItemNextSibling, GetIndexOfItem, GetItemAtIndex, ComputeIntrinsicWidth, and ComputeTotalRowCount all do a similar thing. And nsCSSFrameConstuctor first checks the new child's content tag before calling nsListBoxBodyFrame::OnContentInserted/Removed. Hmm, your next question prompted me to try checking if existingFrame's parent isn't |this|. This works too. > What's the parent frame of that frame we get back from GetPrimaryFrameFor? The parent of the existingFrame is a box frame based on listbox content, it is an ancestor of the listboxbody frame. > I really wish we could just rip it all out already. I've had that same thought. I thought we'd already decided to rip out the listbox dynamic frame creation stuff. > for child content with tag listitem Ah, I'd forgotten this part. And the code moved on m-c, so I didn't see it. OK, yeah. > And nsCSSFrameConstuctor first checks the new child's content tag And node type, note. We need to check both here. > The parent of the existingFrame is a box frame based on listbox content OK. I think just checking the tag + namespace should be fine here. Please move that to before the existingFrame get, though, and don't check existingFrame in that conditional: we shouldn't be returning non-listitem stuff here, right? > I thought we'd already decided to rip out the listbox dynamic frame creation We had. But I haven't done it yet, and we need to fix this bug on all the branches too. :( Maybe Timothy could take an axe to it :-) Made requested changes. I also added an assertion for the parent thing. Assignee: nobody → tnikkel Attachment #384277 - Attachment is obsolete: true Attachment #384550 - Flags: superreview?(bzbarsky) Attachment #384550 - Flags: review?(bzbarsky) Attachment #384277 - Flags: superreview?(bzbarsky) Attachment #384277 - Flags: review?(bzbarsky) Attachment #384550 - Flags: superreview?(bzbarsky) Attachment #384550 - Flags: superreview+ Attachment #384550 - Flags: review?(bzbarsky) Attachment #384550 - Flags: review+ Comment on attachment 384550 [details] [diff] [review] patch Looks great. Let's get this landed on trunk ASAP (I can push tomorrow if it hasn't happened before then) and then see about branches. Timothy, if you do want to work on removing this lazy frame stuff, that would be really nice! (In reply to comment #11) > Timothy, if you do want to work on removing this lazy frame stuff, that would > be really nice! I can add it to my list after everything else. But I don't plan on looking at it any time soon, so feel free to go ahead with it. Added Jesse's testcase as a crashtest. Attachment #384550 - Attachment is obsolete: true Pushed Status: NEW → RESOLVED Closed: 10 years ago Flags: in-testsuite+ Resolution: --- → FIXED Comment on attachment 384586 [details] [diff] [review] patch with test Other than needing a merge on crashtest.list, this applies to both branches. We should land this for 1.9.0.13 and 1.9.1.1... Attachment #384586 - Flags: approval1.9.1? Attachment #384586 - Flags: approval1.9.0.13? Flags: wanted1.9.1.x? Flags: wanted1.9.1.x+ Flags: wanted1.9.0.x+ Flags: blocking1.9.1.1? Flags: blocking1.9.0.13? Flags: blocking1.9.0.13+ Flags: blocking1.9.2? Comment on attachment 384586 [details] [diff] [review] patch with test Approved for 1.9.0.13, a=dveditz for release-drivers Attachment #384586 - Flags: approval1.9.0.13? → approval1.9.0.13+ RCS file: /cvsroot/mozilla/layout/xul/base/src/crashtests/488210-1.xhtml,v done Checking in layout/xul/base/src/crashtests/488210-1.xhtml; /cvsroot/mozilla/layout/xul/base/src/crashtests/488210-1.xhtml,v <-- 488210-1.xhtml initial revision: 1.1 done Checking in layout/xul/base/src/crashtests/crashtests.list; /cvsroot/mozilla/layout/xul/base/src/crashtests/crashtests.list,v <-- crashtests.list new revision: 1.28; previous revision: 1.27 done Checking in layout/xul/base/src/nsListBoxBodyFrame.cpp; /cvsroot/mozilla/layout/xul/base/src/nsListBoxBodyFrame.cpp,v <-- nsListBoxBodyFrame.cpp new revision: 1.102; previous revision: 1.101 done Keywords: fixed1.9.0.13 blocking1.9.1: --- → .2+ Flags: blocking1.9.1.1? → blocking1.9.1.1- Comment on attachment 384586 [details] [diff] [review] patch with test a=beltzner, please land on mozilla-1.9.1 Attachment #384586 - Flags: approval1.9.1? → approval1.9.1.2+ I don't think I ever tried it in 1.9.0 or 1.9.1. I was not able to reproduce a crash on any platform with the testcase in commetn #0 using 3.5. If anyone has way to verify this for 3.5.2, it would be greatly appreciated. Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 I've tried the test case in comment 0 on all the above builds (Windows and Linux include for good measure...). I tried loading the test case 50 times on each platform. Not once did Firefox crash. Verified1.9.1 Keywords: verified1.9.1
https://bugzilla.mozilla.org/show_bug.cgi?id=488210
CC-MAIN-2019-26
refinedweb
1,398
60.92
Slow network speed between VM and external I've got controller and network node on the one phisycal machine running ubuntu 12.04 (3.8.0-36-generic). The problem is that the bandwidth from my VM network to outside network is: [ ID] Interval Transfer Bandwidth [ 6] 0.0-10.3 sec 46.9 MBytes 38.2 Mbits/sec [ 4] 0.0-12.5 sec 896 KBytes 586 Kbits/sec [ 5] local 172.100.0.20 port 5001 connected with 172.100.0.101 port 50791 I'm running neutron with VLAN networking. The speeds between VM's are ok ~450 Mb/s. From VM to qrouter is also slow: [ ID] Interval Transfer Bandwidth [ 5] 0.0-10.0 sec 218 MBytes 182 Mbits/sec [ 4] 0.0-10.1 sec 29.9 MBytes 24.9 Mbits/sec I have disabled gro for my br-ex int eth0 on my network/controller node: $ ethtool -k eth0 Offload parameters for eth0: rx-checksumming: on tx-checksumming: on scatter-gather: on tcp-segmentation-offload: on udp-fragmentation-offload: off generic-segmentation-offload: on generic-receive-offload: off large-receive-offload: off rx-vlan-offload: on tx-vlan-offload: on ntuple-filters: off receive-hashing: off Please help me get this working at higher speeds. When i disabled rx-checksumming and tx-checksumming the transfers gow higher but not as high as i expected: [ ID] Interval Transfer Bandwidth [ 4] 0.0-10.0 sec 266 MBytes 222 Mbits/sec [ 5] 0.0-10.1 sec 27.5 MBytes 22.9 Mbits/sec Please help. I'm trying turning off and on and it won't help. Maybe i need to turn off gro at node interface or something? run tcpdump on the qr-xxxxxxxx-xx interface in the qrouter namespace and check for packets much greater than 1500 bytes, eg >1600. That would suggest offloading is happening somewhere. From IRC: found that the machine was rebooted and so GRO was enabled on ethX again. After GRO disabled again, there where no more big packets on ethX or gr-xxxxxxx-xx, but still slow. Tcpdump show retransmissions. Suggested turning off all offloading stuff on ethX - that didn't work either. I think this issue is only seen with recent Ubuntu kernels - 3.5 and 3.8. A possible solution is to use the older 3.2 kernel on the node running the L3 agent.
https://ask.openstack.org/en/question/25306/slow-network-speed-between-vm-and-external/
CC-MAIN-2021-04
refinedweb
400
69.38
Why won't JRockit find my classes By tomas.nilsson on Jan 28, 2010 This is the second post by Mattis, diving deep into JVM specifics. NoClassDefFoundErrors are a drag. The classloader mechanism in the Java specification is very powerful, but it also gives you plenty of ways to mess things up. In which jar did you put that class file, and why isn't your classloader looking in that jar? In rare cases, you might even have an application that works using Sun Java, but throws a NoClassDefFoundError with JRockit. Surely, this must be a JRockit bug? Not necessarily. There is a slight difference in how the two JVMs work that can explain this behaviour, especially if you modify your classloaders during runtime. Let's take an example: In a separate folder "foo", create a file Foo.java: public class Foo { public Foo () { System.out.println("Foo created"); } }Now, in your root folder for this experiment, create the file ClasspathTest.java: import java.io.File; import java.net.URLClassLoader; import java.net.URL; import java.lang.reflect.Method; public class ClasspathTest { private static final Class[] parameters = new Class[]{URL.class}; // Adds a URL to the classpath (by some dubious means) // method.setAccessible(true) is not the trademark of good code public static void addURL(URL u) throws Exception { Method method = URLClassLoader.class.getDeclaredMethod("addURL", parameters); method.setAccessible(true); method.invoke((URLClassLoader) ClassLoader.getSystemClassLoader(), new Object[]{u}); } public static void main(String[] arg) throws Exception{ // Add foo to the classpath, then create a Foo object addURL(new File("foo").toURL()); Foo a = new Foo(); } }This class has a method "addURL" that basically adds a URL to the classpath of the system classloader. The main method uses this method to first add the folder "foo" to the classpath and then creates a Foo object. When you compile this method, add "foo" to the classpath: > javac -classpath .;foo Test.javaBut when you run the program, don't add foo, simply run > java TestUsing Sun Java, this will work fine. In the first line of main, we add the foo-folder to the classpath. When we create our first Foo-object, we find the Foo class in the foo folder. Using JRockit however, you get: Exception in thread "Main Thread" java.lang.NoClassDefFoundError: Foo at ClasspathTest.main(ClasspathTest.java:20)To understand this behaviour, you have to first understand how Sun and JRockit runs code. Sun Java is an interpreting JVM. This means that the first time you run a method, the JVM will interpret every line step by step. Therefore, Sun will first interpret and run the first line of main, adding "foo" to the classpath, and then the second line, creating the Foo object. JRockit however uses another strategy. The first time a method is run, the entire method is compiled into machine code. To do this, all classes used in the method needs to be resolved first. Therefore, JRockit tries to find the Foo class BEFORE the "foo" folder is added to the classpath, resulting in the NoClassDefFoundError (still thrown just before trying to use the class). So, who is right? Actually, according to the Java spec, both are. Resolving the classes can be done either right before the class is used or as early as during method invocation. For most developers, this is just trivia, but from time to time we see problems with this from customers. The solution? Don't modify your classloaders in the same method as you need the change to load a class. In the example, the following change works fine in both Sun and JRockit: public static void main(String[] arg) throws Exception{ // Add foo to the classpath, then create a Foo object in another method addURL(new File("foo").toURL()); useFoo(); } public static void useFoo() { Foo a = new Foo(); }Here, using JRockit, the class is not resolved until the method useFoo is compiled, which will be AFTER "foo" is added to the classpath. /Mattis PS: Adding URLs to the system classloader during runtime might not be a good idea. But when using your own defined classloaders, modifying these during runtime could very well be according to design.
https://blogs.oracle.com/jrockit/entry/why_wont_jrockit_find_my_class
CC-MAIN-2016-30
refinedweb
694
65.42
Red Hat Bugzilla – Bug 41451 Installer cras loading anaconda python routines Last modified: 2007-04-18 12:33:20 EDT From Bugzilla Helper: User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0) Description of problem: My system is a K6-2 400; 160mb, 30gb hd, Matrox G450 16mb agp, PCDOS 7.0, OS/2, W2kPro,RHT 7.0 all running using BootMagic 1.0. During install after the initial blue screens, the following appears: Running Anaconda - please wait Could not find platform dependent Libraries <exec-prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (innermost last) File "/usr/bin/anaconda", line 42, in ? import iutil File "/usr/bin/anaconda/iutil.py", line 2 in ? import types, os, sys, isys, select, string, stat, signal Importerror: No Module named select Install ended abnormaly ... How reproducible: Always Steps to Reproduce: 1.load CD (in my case the seawolf ISO created disk) 2.press enter 3.accept us kyb and mouse Actual Results: see description Expected Results: good install? Additional info: 30 mb /boot on /dev/hda6 3gb / on /dev/hda13 250mb swap on /dev/hda14 Sounds like you could have a bad cd. Did you check the md5sums of the ISOs before you burned them? ahem.... I was in too great a hurry to get it.... I will reburn and check md5sums. Since my RH7.0 install is down, because of a new Matrox G450, Can I check md5sums from W2k? , how? (a pointer will do.) Thanks, Rich Cottle A few of our users have reported good results with md5sumer. It works on Windows 95/98/NT/2000 and can be found at: It is freeware, so give it a try and see if your md5sums are good. Md5summer reported that the md5 summs were wrong, so I reloaded last night... too tired to re-burn last night so I will try that tonight.... Thanks for the help.... I think that we can close this now as a 'due' - 'dumb user error' ;-) Thanks again, Rich p.s. neglected to state that the md5s are ok now... It's ok. We get *tons* of bugs that turn out to be bad downloads. Thanks for working with us. I get this same error using the DVD-ROM that came with "Linux for Dummies". There's no way to check the md5sums, is there? This DVD-ROM has the option to boot into Knoppix or run the FC3 install. What can I do?
https://bugzilla.redhat.com/show_bug.cgi?id=41451
CC-MAIN-2016-50
refinedweb
410
77.33
Last updated on September 30th, 2017 | Have you tried to get Ionic Twitter login working with Firebase? There was a change when Firebase updated to V3 that made all the cool sign-in with pop-up and redirection stop working with Cordova apps. It sucked! But then I realized something, yeah, it sucked for us as developers, but it pushed us to find a better way for our users, and it was to find the native plugins. Why is it better for our users? Because when you used ionic twitter login in the previous version it had a pop-up to authorize the app, that pop-up was usually a browser one, and the user had to enter their twitter credentials half of the time. And that really sucks (I’m looking at you Instagram!) That really sucks, why would I want to enter my credentials, can’t they just have a cool native pop-up so I can click and authorize it? We’re going to be using the twitter-connect plugin and connecting to twitter using the Fabric SDK, that way, we can give a native experience to our users, instead of making our users think/do too much. In this post, you’ll learn: - How to set up your Fabric account. - How to set up the Twitter Connect plugin (from Ionic Native) - How to use the plugin to get the login token. - Use those token credentials to sign your user into Firebase. You’ll be able to give your users a better experience, similar to the picture below: Ready to get started? Make sure to get the code directly from Github so you can follow along with the post:. Ionic Twitter Login We’re going to break down the process in 6 steps: - Step #1: Set up your Fabric Account. - Step #2: Get your Fabric API Key. - Step #3: Create your app in Fabric. - Step #4: Install the Twitter Connect Plugin - Step #5: Enable Twitter Authentication in Firebase. - Step #6: Write the code to get the token from Twitter and sign the user in. I think that intro was too long, so let’s get busy. Step #1: Set up your Fabric Account. How many times have I said something sucked in this post? This is going to be another one, this process sucks! 😛 There’s a lot of work on setting up your fabric account, including running Android Studio (or Xcode) installing plugins, and running a native app. In the next few lines I’ll do my best to explain it so you don’t have to bang your head against the keyboard (like I did). The first thing you’ll do is go to Fabric’s website and create an account After you signup, they’ll send you a confirmation email so you can get started, go to your email and click on the link they sent you. It will take you to a page where you can start working with Fabric, you’ll add a team name, and then it will ask you what platform you’re developing for, this is where things get tricky. You can either select iOS or Android, one little advise, choose the one where you have the SDK and native IDE up to date, so if your Android Studio installation and Android SDK are up to date, go ahead and pick android, if not, then pick iOS. When you pick the platform you’ll go through an on-boarding process (yup, they actually think this is good), just read through every single message and follow the instructions step by step But basically, you need to install Fabric’s plugin in your IDE, then install the Twitter SDK into your app through the plugin, the plugin will show you how to install it, it even has a one-click install that adds everything (at least on Android it does). Once everything is installed, you need to build and run the app, once you do, it will send a signal to Fabric letting them know that you have successfully installed and ran the SDK, which will let you pass this “pending” page: If it doesn’t happen automatically, feel free to comment and I’ll help you debug. Step #2: Get your Fabric API key I don’t know why, but there’s no easy way to get this, like, I would expect you could go into settings and copy your API key, but no, that’s not what they wanted I guess. Thankfully, Manifest Web Design, the awesome people that wrote the Twitter Connect Plugin already knew this and they had instructions on how to get your API key. - First, go to. - Look for the Add Your API Key block of code - Inside the <meta-data />block, you’ll find the value for your Fabric API key. Easy, right? 😛 Step #3: Create your Fabric APP It’s time to create the app we’ll be using to ask for the user’s permissions, this on the easier side of things, all you have to do is go to and click the “ADD” button, it will ask you for some information and you’ll be able to create the app. The app will have some information on it, you’ll need to copy the CONSUMER KEY and the CONSUMER SECRET, since we’ll need them later for the plugin setup. Step #4: Install the Twitter Connect Plugin Now it’s time to install the Twitter Connect Plugin, for that we first need to have an Ionic Framework app created, if you don’t know how to create an Ionic app and initialize Firebase then first read this post and come back to this after you’re done. Now that your app is created, open your terminal (you should be inside your app’s folder) and install the twitter connect plugin: $ ionic plugin add twitter-connect-plugin --variable FABRIC_KEY=<FabricAPIKey> $ npm install --save @ionic-native/twitter-connect Remember to change <FabricAPIKey> with your own API key (the one we got in Step #2). Once the plugin is installed, you’ll need to do some config, go ahead and open the config.xml file that’s in the project root, and right before the closing </widget> tag, add this: <preference name="TwitterConsumerKey" value="<Twitter Consumer Key>" /> <preference name="TwitterConsumerSecret" value="<Twitter Consumer Secret>" /> Remember to change those values with the CONSUMER KEY and the CONSUMER SECRET we got in Step #3 when we created the app in Fabric. We need to declare the twitter-connect package as a provider in app.module.ts now: import { StatusBar } from '@ionic-native/status-bar'; import { SplashScreen } from '@ionic-native/splash-screen'; import { TwitterConnect } from '@ionic-native/twitter-connect'; @NgModule({ ..., ..., ..., providers: [ {provide: ErrorHandler, useClass: IonicErrorHandler}, StatusBar, SplashScreen, TwitterConnect ] }) export class AppModule {} That’s it, everything in your app is set up and ready to be used. Step #5: Enable Twitter Authentication in Firebase. Now you need to tell your Firebase app to allow users to Sign-In with Twitter, for that go to your Firebase Console Choose your app and inside the Authentication Tab go to “Sign-In Method” and enable Twitter, it’s going to ask you for an API Key and Secret, you’ll use the same you just used, the ones for the app you created in Fabric. Step #6: Write the code to get the token from Twitter and sign the user in We can finally start coding now 🙂 The first thing we’ll do is create a button so our user can log-in, so go ahead and open home.html and remove the placeholder content, then add a button: <ion-content padding> <button ion-button block <ion-icon</ion-icon> Login with Twitter </button> </ion-content> The button is calling a function that we’ll create in the Class that will handle the login part, it also has an ngIf tag, that makes sure you only see the button if you’re logged out (we’ll create that logic later). If the user is logged-in, we want to show the user’s profile picture, twitter name, and full name. <ion-content padding> <button ion-button block <ion-icon</ion-icon> Login with Twitter </button> <ion-item * <ion-avatar item-left> <img [src]="userProfile.photoURL"> </ion-avatar> <h2>{{ userProfile.displayName }}</h2> <h3>{{ userProfile.twName }}</h3> </ion-item> </ion-content> By the end, that page will look something like this: Now that we have that part covered, then it’s time to import everything we’ll need into home.ts import { Component } from '@angular/core'; import { NavController } from 'ionic-angular'; import { TwitterConnect } from '@ionic-native/twitter-connect'; import firebase from 'firebase'; - We’re importing TwitterConnectbecause that’s the ionic native package to handle the plugin. - And we’re importing Firebase so we can sign-in our users. Then, right before the constructor, we need to add one variable: userProfile: any = null; The userProfile will hold the information we want to show about the user. Now initialize the zone variable in the constructor and inject TwitterConnect constructor(public navCtrl: NavController, private twitter: TwitterConnect) {} It’s time to move to our login function, we’re going to create the function and add the login functionality for twitter, go ahead and add this to your code: twLogin(): void { this.twitter.login().then( response => { console.log(response); }, error => { console.log("Error connecting to twitter: ", error); }); } Right there you’ll get all the twitter functionality, go ahead and run it in a phone, you should see the blue login button, when you click it you’ll get a screen like this: If you’re inspecting the device, you’ll notice it logs the response to the console, the response looks something like this: { userName: 'myuser', userId: '12358102', secret: 'tokenSecret' token: 'accessTokenHere' } We need to pass now that token and secret to Firebase so our user can log into our application. For that first, create a credential object using the TwitterAuthProvider method, and then pass that object to Firebase: twLogin(): void { this.twitter.login().then( response => { const twitterCredential = firebase.auth.TwitterAuthProvider .credential(response.token, response.secret); firebase.auth().signInWithCredential(twitterCredential) .then( userProfile => {}); }, error => { console.log("Error connecting to twitter: ", error); }); } We’re using const twitterCredential = firebase.auth.TwitterAuthProvider .credential(response.token, response.secret); To create a credential object and then pass it to the signInWithCredential Firebase method, then we just need to handle the return of that function, we’ll just add the response to the this.userProfile variable so we can use it in our HTML twLogin(): void { this.twitter.login().then( response => { const twitterCredential = firebase.auth.TwitterAuthProvider .credential(response.token, response.secret); firebase.auth().signInWithCredential(twitterCredential) .then( userProfile => { this.userProfile = userProfile; this.userProfile.twName = response.userName; console.log(this.userProfile); }, error => { console.log(error); }); }, error => { console.log("Error connecting to twitter: ", error); }); } We’re also adding this.userProfile.twName = response.userName; because the firebase authentication object doesn’t have that information for us. And that’s it, you now have a fully working authentication system using Twitter and Firebase 🙂
https://javebratt.com/ionic-twitter-login/
CC-MAIN-2017-43
refinedweb
1,843
56.49
The 4s blue flash blues I've got a gpy with a simple lte prgram loaded into flash as main.py (I've removed boot.py). It wakes up, attaches & connects on lte cat M1, uploads a few values, de-connects/attaches then goes to deepsleep for 6 mins. It's supposed to do this indefinitely. Unfortunately the longest I've ever got out of it is 14 hrs. I always find it doing the old 4s blue flash of a regular gpy after it's stopped working. If I cycle the power my program starts again. I'm a bit flummoxed on what to do next. Where is this mysterious program that does the 4s blue flash & how is the gpy running it when my main.py program is sitting in flash as the supposedly preferred option of programs to run? @reidfo No I've got everything in try/except loops so it can't crash. This is always when it wakes from deepsleep, instead of running boot or main .py it jumps into dreaded 4s blue flash mode. Grim! @kjm is it getting stuck trying to attach to LTE possibly? If so, do you have the watchdog timer set? I have a few trouble spots in my code that sometimes get stuck as well, so I make sure that in those loops I do not call WDT.feed(), so the board will reset after a few minutes of being in the stuck state. @kjm Loaded my program into both boot.py & main.py Still ends up up in 4s blue flash mode eventually just takes longer, a day or two. Not being able to run the boot/main.py programs reliably after deepsleep is a show stopper for this device if I can't find a fix. Kicking myself now for not testing this thoroughly first before wasting all that time with the LTE. @kjm Paul I moved the gpy back from the battery we're monitoring to the expansion board. I think I can see where it is getting back to the 4s blue flash cycle. When it wakes up from it's 6min deepsleep it prints ets Jun 8 2016 00:22:57 It's while printing out this wake-from-deepsleep message that it gives a couple of brief blue flashes with an on time about the same as the default 4s blue flash. I think this is where it get's stuck in this mode & fails to finish loading my main.py - Paul Thornton last edited by @paul-thornton main.py runs again after a reset Paul, code on the email with link - Paul Thornton last edited by the blue flash is the "heartbeat" and lives in firmware. It can be disabled by import pycom pycom.heartbeat(False) That said. It doesnt explain why your machine is returning to that state after 14 hours. Does your code begin running again after a reset? Or does it stay missing untill you re upload the code. Would you be able to post the code? If you prefer it to be private you can email it to me at paul@pycom.io and include a link to this thread :)
https://forum.pycom.io/topic/4143/the-4s-blue-flash-blues/8
CC-MAIN-2019-30
refinedweb
532
82.85
_exit, _Exit - terminate the current process #include <unistd.h> void _exit(int status); #include <stdlib.h> void _Exit(int status); void _exit(int status); #include <stdlib.h> void _Exit(int status); The function _exit() terminates the calling process "immediately". Any open file descriptors belonging to the process are closed; any children of the process are inherited by process 1, init, and the processs parent is sent a SIGCHLD signal. The value status is returned to the parent process as the processs exit status, and can be collected using one of the wait() family of calls. The function _Exit() is equivalent to _exit(). SVr4, POSIX.1-2001, 4.3BSD. The function _Exit() was introduced by C99.. execve (2) exit_group (2) fork (2) kill (2) wait (2) wait4 (2) waitpid (2) Advertisements
http://www.tutorialspoint.com/unix_system_calls/exit.htm
CC-MAIN-2017-30
refinedweb
130
57.98
When it comes to accessing Cassandra from Scala there are 2 possible approaches: Custom-DSL are nice as they provide all the type-safety you need against your data schema. However in this post I will focus only on the Java driver. Why? Because it’s both a simple and decent solution in my opinion. The bad thing is that you lose any type-safety as all the queries are just plain strings. On the other hand you don’t have to learn a new DSL because your queries are just CQL. Add a thorough test coverage and you have a viable solution. Moreover the Java driver provides an async API backed by Guava’s futures and it’s not that difficult to turn these futures into Scala futures – which makes a quite natural API in Scala. There are still some shortcomings that you’d better be aware of when consuming a result set but overall I think that it’s still a simple solution that is worth considering. Scala integration of the Cassandra Java driver Writing CQL statements Our goal here is to be able to write CQL statements like this val query = cql"SELECT * FROM my_table WHERE my_key = ?" For that we’ll define our own String interpolation. Looks scary? No worries it pretty easy to do in Scala: import com.datastax.driver.core._ import com.google.common.util.concurrent.ListenableFuture implicit class CqlStrings(val context: StringContext) extends AnyVal { def cql(args: Any*)(implicit session: Session): ListenableFuture[PreparedStatement] = { val statement = new SimpleStatement(context.raw(args: _*)) session.prepareAsync(statement) } } And that’s it. Now let’s see how we can use it. First we need a Cassandra session in the implicit scope to be able to use our CQL strings. implicit val session = new Cluster .Builder() .addContactPoints("localhost") .withPort(9142) .build() .connect() And then we’re ready to go (provided there is a Cassandra instance running on localhost) val statement = cql"SELECT * FROM my_keyspace.my_table WHERE my_key = ?" Nice, exactly what we hoped for! But as a Scala developer you’d rather deal with Scala Futures than the Guava’s ListenableFuture. Integration with Scala Future We can convert a ListenableFuture into a Future by means of a Promise. The idea is to complete the promise from the callback of the ListenableFuture and return the Future of the Promise. import com.google.common.util.concurrent.{ FutureCallback, Futures, ListenableFuture } import scala.concurrent.{ Future, Promise } import scala.language.implicitConversions implicit def listenableFutureToFuture[T]( listenableFuture: ListenableFuture[T] ): Future[T] = { val promise = Promise[T]() Futures.addCallback(listenableFuture, new FutureCallback[T] { def onFailure(error: Throwable): Unit = { promise.failure(error) () } def onSuccess(result: T): Unit = { promise.success(result) () } }) promise.future } We declare the method implicit so that all the ListenableFuture are automatically converted into Scala Future without anything else to do for us. Then we can change the signature of our cql string interpolation to return a Future[PreparedStatement] implicit class CqlStrings(val context: StringContext) extends AnyVal { def cql(args: Any*)(implicit session: Session): Future[PreparedStatement] = { val statement = new SimpleStatement(context.raw(args: _*)) session.prepareAsync(statement) } } Now that we have a PreparedStatement ready we need to execute it somehow. So let’s create a method that binds the PreparedStatement and execute it. import scala.concurrent.{ ExecutionContext, Future, Promise } def execute(statement: Future[PreparedStatement], params: Any*)( implicit executionContext: ExecutionContext, session: Session ): Future[ResultSet] = statement .map(_.bind(params.map(_.asInstanceOf[Object]))) .flatMap(session.executeAsync(_)) If we want to use it we can write something as simple as this (assuming everything is in scope) val myKey = 3 val resultSet = execute( cql"SELECT * FROM my_keyspace.my_table WHERE my_key = ?", myKey ) Pretty neat, isn’t it? Feels very scala-ish. It’s not bad given the small amount of code we just write to improve the java driver integration with Scala. Of course the cql statements are just strings to there is no schema validation whatsoever at compile-time. It can always fail at runtime. That’s why you need a proper test coverage! (or use a third-party library which provides this kind of type-safety). Consuming a Cassandra ResultSet Now that we are up to a point where we can get a ResultSet from Cassandra, let’s see how to extract the results from it. The naive way to extract the rows from the result set would be to do something like this import scala.collection.JavaConverters._ val resultSet = execute(cql"SELECT * FROM my_keyspace.my_table") val rows: Future[Iterable[Row]] = resultSet.map(_.asScala) This code simply converts the result set into an Iterable[Row]. That’s perfectly fine as long as your result set returns only a few rows. If the result sets contains thousands of rows you have to be careful when you consume this result set. For instance one common thing to do is to turn the Cassandra Row into a domain object. val entities = execute(cql"SELECT * FROM my_keyspace.my_table") .map(_.asScala.map(parseEntity)) Assuming parseEntity is a function Row => Entity. What is not obvious here is that the map operation that turns a Row into an Entity will actually consume the whole dataset. Yes, it will loads everything into memory. Why? Because Scala Iterable is strict. However there is an easy way to remedy this problem: Call the view method on this Iterable to make it non-strict. val entities = execute(cql"SELECT * FROM my_keyspace.my_table") .map(_.asScala.view.map(parseEntity)) Alternatively you can turn it into a Stream to achieve the same results val entities = execute(cql"SELECT * FROM my_keyspace.my_table") .map(_.asScala.toStream.map(parseEntity)) That’s much better but there is still something that I don’t quite like: paging. What do I mean? Well, when a result set contains many rows (typically more than 5000) the driver doesn’t fetch all of them at once. Instead it uses paging and returns only the first page of data (i.e. the first 5000 rows). When you iterate over the rows that’s pretty fast as everything is available from memory … until you try to fetch the 5001st row. At this point the driver needs to fetch another page of data (i.e. the next 5000 rows) from the database and this time this is a blocking. (There is no way to get a Future while we are iterating over the rows). In my application is takes about 100 ms to fetch an additional page of data but I’d rather not block my application threads to fetch the database results. Note that the page size is configurable with Statement.setFetchSize. In our implementation that can fit into our execute statement: def execute(statement: Future[PreparedStatement], pageSize: Int, params: Any*)( implicit executionContext: ExecutionContext, session: Session ): Future[ResultSet] = for { ps <- statement bs = ps.bind(params.map(_.asInstanceOf[Object])) rs <- session.executeAsync(bs.setFetchSize(pageSize)) } yield rs That gives us a little room to avoid paging but it’s not a proper solution. We want to use the cassandra session and not our application threads to fetch more data. The ResultSet API has an async method to fetch more results, simply called fetchMoreResults and methods to check wether or not this result set is exhausted, is completely fetched and get the number of rows fetched. ListenableFuture<ResultSet> fetchMoreResults(); boolean isExhausted(); boolean isFullyFetched(); int getAvailableWithoutFetching(); With this we are able to write a function that takes a ResultSet and return a ResultSet with more results. def fetchMoreResults(resultSet: ResultSet)( implicit executionContext: ExecutionContext, session: Session ): Future[ResultSet] = if (resultSet.isFullyFetched) { Future.failed(new NoSuchElementException("No more results to fetch")) } else { resultSet.fetchMoreResults() } So now what? Can we get a get something like an Iterable[Future[ResultSet]]. Well, not quite! In fact it is certainly possible to create such an Iterable but there is no way to end the iteration as we’ll have to wait for the future to complete to know if there is a next element. As the iterable doesn’t wait for the future to complete (because it hasn’t any knowledge of the type of its elements) it returns an “infinite” number of elements. No quite what we want! The observable pattern is exactly what we are after. The Monix library provides a pretty good observable implementation and is part of the typelevel project (of course there are other implementations like RxScala or even reactive streams, …). Looking at the Observable API there is a function of particular interest in our case: def fromAsyncStateAction[S, A](f: S => Task[(A, S)])(initialState: => S): Observable[A] This function allows to generate the elements of an Observable. It takes a function that given a state S returns a Task containing a Pair of an element A and the next state S. We initiate the generation of elements by providing an initial state. I haven’t said anything so far about what a Task is so far. You can think of it as a Future that doesn’t execute automatically when created but only when it is told to. Task is also part of the Monix library. In our case our initialState will be the Future[ResultSet] returned by execute(). And we want to return an Observable[ResultSet]. That means S is Future[ResultSet] and A is simply ResultSet. Everything seems to fit in place quite nicely so let’s write a query function that returns an Observable[ResultSet]. import monix.eval.Task import monix.reactive.Observable def query(cql: Future[PreparedStatement], parameters: Any*)( implicit executionContext: ExecutionContext, cassandraSession: Session ): Observable[ResultSet] = { val observable = Observable.fromAsyncStateAction[Future[ResultSet], ResultSet] { nextResultSet => Task.fromFuture(nextResultSet).flatMap { resultSet => // consume the fetched rows in order to trigger isExhausted (1 to resultSet.getAvailableWithoutFetching) foreach (_ => resultSet.one) Task((resultSet, resultSet.fetchMoreResults)) } }(execute(cql, parameters: _*)) obs.takeWhile(rs => !rs.isExhausted) Not bad but we’re not really interested in the ResultSet itself but in the fetched rows. So let’s change our method to return an observable of Rows instead. def query(cql: Future[PreparedStatement], parameters: Any*)( implicit executionContext: ExecutionContext, cassandraSession: Session ): Observable[Row] = { val observable = Observable.fromAsyncStateAction[Future[ResultSet], ResultSet]( nextResultSet => Task.fromFuture(nextResultSet).flatMap { resultSet => Task((resultSet, resultSet.fetchMoreResults)) } )(execute(cql, parameters: _*)) observable .takeWhile(rs => !rs.isExhausted) .flatMap { resultSet => val rows = (1 to resultSet.getAvailableWithoutFetching) map (_ => resultSet.one) Observable.fromIterable(rows) } } Here we slightly changed our generation function to extract the fetched rows from the result set. That gives us an Observable[Iterable[Row]]. We then flatMap it to get an Observable[Row]. What does this get us? Let’s be honest, probably not much in terms of performance. It’s still going to need the same amount of time to fetch data from Cassandra. The main advantage now is that we’re no longer blocking our application threads to fetch the data. From the client side it becomes quite easy to query a Cassandra table import monix.execution.Ack import monix.execution.Scheduler.Implicits.global // creates an observable of row val observable = query(cql"SELECT * FROM my_keyspace.my_table") // nothing happens until we subscribe to this observable observable.subscribe { row => // do something useful with the row here println(s"Fetched row id=${row.getString("my_key")}") Ack.Continue } Parsing a row Parsing a row is pretty straight-forward. However there is one pitfall that you need to be aware of. That is handling null values. In scala we don’t like null. Instead we can use Option to indicate the absence of a value. A natural thing to do when parsing a row might be something like: val maybeName = Option(resultSet.getString("name")) val maybeAge = Option(resultSet.getInt("age")) What you expect to get here if the name is not set is a None. Which is what happens – if no value is set in Cassandra the java driver returns a null which is turned into None by the Option’s apply method. And we can expect the same thing to happen on the second line for the age. But no, if there is no value set for the age in Cassandra the driver doesn’t return null but 0. So in this case you get a Some(0). In fact you never get a None here. So the correct implementation is: val maybeAge = if (resultSet.isNull("age")) None else Some(resultSet.getInt("age")) Testing We are now approaching the end of this blog post so it’s a good time for a few words on testing. The good thing is that there is a an embedded version named cassandra-unit that you can use to run your tests. It’s pretty easy to setup: import java.net.InetAddress import com.datastax.driver.core.Cluster import org.cassandraunit.utils.EmbeddedCassandraServerHelper import scala.concurrent.duration._ EmbeddedCassandraServerHelper.startEmbeddedCassandra(60.seconds.toMillis) val cluster = new Cluster .Builder() .addContactPoints(InetAddress.getByName("127.0.0.1")) .withPort(9142) .build() implicit val session = cluster.connect() However is damn slow so be careful not to spin up a Cassandra instance for every single test. Instead you can share the same session among tests. This requires that you clean up the data after a test. Using “TRUNCATE table” seems to do a decent job. You should limit your tests to the minimum using Cassandra. Only test your queries and mapping to/from domain objects. Perform an extensive testing of these functions as we don’t have any type-safety here (everything is stringly typed). I think that’s all you need to test using Cassandra. You should be able to tests the remaining of your application without starting up a Cassandra instance.
https://www.beyondthelines.net/databases/querying-cassandra-from-scala/
CC-MAIN-2019-30
refinedweb
2,253
50.23
Scripted playback of 3D moviesdesc Digia Spins Off Qt As Subsidiary This move increases the focus of the Qt team. Most developers know what Qt is, but who can tell off the top of their head what Digia does, and why Qt is strategically important to them? Ask Slashdot: What Are the Strangest Features of Various Programming Languages? XLR () has essentially a single operator, -> which reads as "transforms into". The rest is defined in the library. (OK, to be honest, that's the theory. In practice, the current language implementations take many shortcuts). Normal Humans Effectively Excluded From Developing Software Software complexity follows Moore's law, an exponential law. So with a fixed set of tools, you are bound to reach a point where you can't code effectively. That's why we need either new sets of tools on a regular basis (e.g. C -> C++ -> Java -> ...) or tools that evolve over time (e.g. Lisp). See... for another take at tools that evolve over time. How the Internet Is Taking Away America's Religion Does Relying On an IDE Make You a Bad Programmer? Emacs is complicated enough that it makes you a better programmer just to use it. Ask Slashdot: Why Are We Still Writing Text-Based Code?: With this approach, it is possible to use nice notations for arbitrary concepts. In Taodyne's products, for example, a slide is described by something. FSF's Richard Stallman Calls LLVM a 'Terrible Setback'. Charlie Stross: Why Microsoft Word Must Die Yes, it's possible to do better. But inventing new models is not easy, and it's an uphill battle. GNU Make 4.0 Released See page 184 of the Unix Haters Handbook,. That has to be the most obstinate bug in the world. Helpful comments in the source code: if (wtype == w_eol) /* There's no need to be ivory-tower about this: check. What Are the Genuinely Useful Ideas In Programming? Concept programming is the simple idea that the concept is not the code, and that being aware of the differences matters. See for more details. Social Networks Force Barilla Chairman To Apologize For His Anti-gay Remarks. Iran Plans To Launch an 'Islamic Google Earth'. EA Repeats As 'Worst Company In America' The fact that a video game company was voted worst company in America is ridiculous and would be laughable if it was not so frightening. Come on! Is there nothing more serious on the planet than botching a game release? Aren't companies that fight like crazy to deprive cancer patients from inexpensive treatments a little worse? Or companies who lie to be free to play with your health in the name of profit? Or companies using child labor to lower the price of smartphones? Or simply profitable companies planning massive layoffs? Or media associations with an agenda built on layers of lies? Apparently, for the majority of Slashdot readers, getting a perspective chip would be a good idea. WebKit Developers Discuss Removal of Google-Specific Code One has to wonder. For Jane's, Gustav Weißkopf's 1901 Liftoff Displaces Wright Bros. The page you are referring to is only trying to validate the testimony of various people from that time regarding one specific photo. The photo was lost, but we have lithographs that reportedly were based on it, like this one. So the investigation is only about checking whether witnesses who claim they saw the photo at the 1906 exhibition were credible. It's not inventing the reports, it's checking them. As for the reports, Jane's writes: In short, there are numerous articles indicating that Whitehead achieved sustained controlled flight in 1901, and demonstrated a 360 degrees turn in 1902 with a different plane. Whitehead's planes were taking off the ground under their own power, something that the Wright brothers didn't have in 1903. So why didn't we hear more from Whitehead? It's not a conspiracy theory. To quote Jane's again: Computer History Museum Wants to Preserve Minitel History Minitel was all about a network of services, from phone directory to Minitel Rose (ASCII pr0n). Without recreating the network, the exhibit will show dead hardware, not its original soul Linus Torvalds Explodes at Red Hat Developer Why all the swearing? Isn't Torvalds smart enough to express the exact same idea in a civil manner? To think that there was a big ruckus when Dujardin said "Putain". Physicists still confused over how to interpret Quantum Mechanics Yes, I think physics is confused. My own interpretation is that what is missing is a good definition of what measurements are. See also the more technical formulation. Why Hasn't 3D Taken Off For the Web? Taodyne delivers Tao, a 3D dynamic document description language which is quite a departure from HTML + WebGL for building 3D contents. Based on our experience, here are some of the key attributes you need for good 3D to take off on the web: * Device independence, like PDF or HTML. 3D does not just mean 3D models, but also depth, stereoscopy. You don't want to have to care about the many 3D technologies out there, active, passive, auto-stereoscopic, holographic, whatever. Tao contents adapts transparently, and will look exactly the same on a 2D or 3D display, including 3D without glasses from Alioscopy, Tridelity or Dimenco/Philips. Of course, it degrades gracefully on a 2D screen just like PDF degrades gracefully on a black-and-white printer. * Integration of text, 2D graphics, images, movies and 3D objects in the same 3D scene. We are very far from that in HTML + WebGL, where there is practically zero integration between 2D and 3D contents. In Tao, 2D graphics and text obey the same rotations, translation or scaling as 3D objects. * Being able to mix pre-rendered / filmed 3D movies with real-time 3D contents. In Tao, you can have 3D movie appear on the screen of 3D model of a TV, with text on top of it, all rendered in real-time. And that scene will show correctly even on an Alioscopy screen in glasses-free 3D... * The ability to directly read 3D assets and not just 2D assets. This is almost there for WebGL with Three.js, but still very far from the ease of use of the video tag. By contrast, in Tao, displaying a model that moves with my mouse is nothing more than: import ObjectLoader light 0 light_position 1000, 1000, 1000 rotatey 0.1 * mouse_x object "MyModel.3ds" Right now, Chrome Experiments are proud to announce "Not your morther's Javascript". We should not collectively take pride in having a web that's for experts only. We want to make things easier to create. While the Taodyne 3D dynamic document description language is not available in browsers yet, we clearly see what we did as something that could be part of HTML6. We built it with that in mind. It's text based, and you can reference an URL in images, movies, etc. Actually, we would like nothing better than open-source the whole thing and integrate it with the WebKit, we just don't have the resources to do that at the moment. But if a good soul at Google or Apple is reading this, we can talk. Pope To Resign Citing Advanced Age Wow, is that really your ideal god? Some entity punishing people who don't do what he wants? "Do no evil or you'll get the flu"? How do you envision free-will and true love in your North Korean universe? Would you rather have a god who is "Word" and shares his will as knowledge, or a god who is "Sword" and shares his will with brute force? This issue of free will is addressed as early as Genesis in the Bible. The story of Adam and Eve explains that we are truly free to reject God (something that according to the scripture is not true for all of God's creatures), but also that this freedom has consequences. This was true for Hitler and those who followed us. This was also true for the positive consequences of all WWII soldiers who offered their lives (and too often lost them) for others that they didn't even know.
http://beta.slashdot.org/~descubes
CC-MAIN-2014-42
refinedweb
1,383
73.37
I'm pretty sure someone else has done this, but here's my shot at it: a simplified class to handle INI files with absolutely no API calls at all. This class is a shot at an easy way to create and manage INI files almost effortlessly. The initial version of this class was severely limited: It only took String, Boolean, and Double. That's a rather limited selection of types, and even then, there were a few different bugs with the way it handled parsing an INI file: String Boolean Double The code itself is much easier than before. I took a few ideas from the initial replies in the first article to heart and applied them to the class. Despite the class being rewritten from the ground up, it still has some similarities from the original library. Although, behind the scenes, a few things have changed dramatically. I will explain those later on in the article. For now, let's create a simple INI file, with one section called "Test", and a Key called "StringField" with a Value of "Hello World!". var ini = new INI(); ini.Add("Test"); ini.Add("Test", "StringField", "Hello World!"); That's it! That's all there is to it. You have added a Section, and then given that Section one Key with a Value attached to that Key. The simplest thing now is to access that Section we just created. The INILibrary includes a really simple method of doing just that: var section = ini["Test"]; // or var section = ini.GetSection("Test"); Now, why would I include two different methods of doing this? The first example will never throw an Exception. If a Section does not exist, it will simply add it to the INI, and then return the newly added Section. However, GetSection will throw an Exception if the Section does not exist. I decided to include both methods to give developers a choice in whatever they write when using this library. GetSection As an alternative, you can also get all the Sections in the INI class with a simple call: var sections = ini.Sections; Okay, so, we know how to get a Section. But, what if I want to get a Key now? There are a few different ways you can get Keys as well, all drawing roots from how we got a Section: var key = ini["Test"]["StringField"]; // or var key = ini.GetSection("Test")["StringField"]; // or var key = ini.GetKey("Test", "StringField"); Goodness, that's three different ways to get a simple key! Okay, here's the breakdown: Method 1, just like with the Sections, will never throw an Exception. It will simply add the Key (and/or Section) if it doesn't exist (with the Value as an empty String), and return it. That's it, no Exceptions and no questions. Method 2, as you all should recall, will throw an Exception if the Section doesn't exist. But, it will silently add the Key (with the Value as an empty String) and then give it to you. Method 3 will throw an Exception if the Section or Key do not exist. This was included for the same reason as GetSection was included: To give developers a choice of having methods that throw Exceptions to ones that silently add and continue. Whether or not that is bad design choice now lies in the hands of the developer. GetSection I could go really crazy and give roughly four methods to grabbing Key data, however, I will only give two methods: var value = ini["Test"]["StringField"].Value; // or var value = ini.GetKeyValue("Test", "StringField"); There's a small pattern here that I hope you readers should have recognized by this point: Method 1: If the Key or Section does not exist, it will add both and assign the key an empty String for its Value. Method 2 will throw an Exception if the Section or Key does not exist for again the same reasons that have been explained twice now. New in this version if the INILibrary is the ability to Merge two INI files together. This will, quite simply, take the INI file currently loaded (if applicable), and load in a new INI file. It will merge duplicate sections and ignore keys of the same value that are loaded in. Use this feature at your own risk as it will value the currently loaded INI's field more than the one being imported. Now that that warning is out of the way, here is the easy syntax of Saving, Loading, and Merging: ini.Save("location", Type.INI); ini.Load("location", Type.INI); ini.Merge("location", Type.INI); The "location" refers to the file path to either save to, load from, or merge from. Quite simple and very straightforward. If anyone has requests for additional code examples, or more specific code examples, do not be afraid to ask in the comments as I do check back quite often. This was a fun experiment that really hammered into me that Classes are reference types and Structures are most definitely not. The original design had structures, which I soon found out made manipulating things behind the scene very difficult. It's one reason I can successfully get away with writing code like this with the library: var key = ini["Test"]["StringField"]; key.Value = "newValue"; Console.WriteLine(ini["Test"]["StringField"]); // Outputs "newValue" Console.WriteLine(ini.GetKeyValue("Test", "StringField")); // Outputs "newValue" However, one should take caution with references. It can lead to some unexpected results if not handled with the appropriate care. However, when handled appropriately, it can have some really nice effects in programming. Another thing that should be noted is the heavy use of LINQ I use in the program at certain locations. I use it liberally because of the fact that Classes are still reference types, so why not take advantage of the language features that are at hand? I'm not sure if that's a "Point of Interest", but I think it is to me. XML support is quite limited for the Library. I took the lazy man's route this time around and just used the XmlSerializer instead of writing the document out myself. Merging XML files is -not- supported as of this version. I'm not sure if I can get around to actually supporting the merging myself, but if I can I will incorporate it. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) [Setting] strConnectionString=Data Source=172.16.1.125;Initial Catalog=DB1;User Id=User1;Password=1234; SectionExists FieldExists Time=2,15,30,45 List(of T) Load public T GetValue<T>(string sectionName, string keyName, T defaultValue) { ... get your string value from the ini data ... return ChangeType<T>(key.KeyValue, typeof(T)); } public static T ChangeType<T>(string keyValue, Type type) { return (T)Convert.ChangeType(keyValue, type, Culture); } public void SetValue<T>(string sectionName, string keyName, T value) { sValue = String.Format(Culture, "{0}", value); ... save your string value ... } // if (!SectionExists(section)) throw new SectionDoesNotExistException(section); if (!SectionExists(section)) Add(section); // if (!FieldExists(section, field)) throw new FieldDoesNotExistException(section, field); if (!FieldExists(section, field)) Add(section, field, value); using System.Text; // var rawFileData = System.IO.File.ReadAllLines(iniFile).Where(line => !line.Equals(string.Empty) && !line.StartsWith(";")); var rawFileData = System.IO.File.ReadAllLines(iniFile,Encoding.Default).Where(line => !line.Equals(string.Empty) && !line.StartsWith(";")); // var sw = new System.IO.StreamWriter(iniFile); var sw = new System.IO.StreamWriter(iniFile, false, Encoding.Default); General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/318783/Simplified-INI-Handling?PageFlow=FixedWidth
CC-MAIN-2014-41
refinedweb
1,284
65.01
See also: IRC log <ChrisW> Scribe: Allen <sandro> discussion over what the leading "-" means on the UML diagrams? it seems to mean something about public/private -- something we don't care about here. <sandro> (prefixing the names of relations/properties) discussion about UML diagram for structure of RIF Core Rules question why can't we rename implies to rule? <sandro> Christian shows outline of UML from PRR. csma: why forall is a class? <Hassan> Please everyone make sure to turn on your mikes! Thanks. <Hassan> Thanks! csma: what about rule-set? harold: could be a level above paul also questions forall class <Hassan> mike? <sandro> Sandro: "forall" as a class comes from the standard FOL syntactic nesting sandro: this maps to scoping <sandro> Allen: "forall" represents the class of universally quanitified formulas chris: this mirrors fol syntax csma: why is the rule associated with forall instead of implies? <ChrisW> scribenick: Allen harold: keep it general for extensibility <sandro> error in diagram --- forall can take either an implies or a positive --- diagram says it has to have both. harold: positive is a disjunction? csma: should we extend this arbitrary formulas? sandro: at some point yes harold: you need disjunctions for integrity constraints csma: link from forall to postive? hassan: likes this diagram ... this covers prolog class of languagaes nicely chris: but there is still a problem with the diagram csma: that link allows for rules with empty body harold: it is only a matter of brevity of expression csma: straw poll on this 5 prefer as is <sandro> straw poll preference 5-to-3 for having facts as themselves, instead of as degenerate rules 3 prefer remove that link 2 don't care <Harold> Hereditary Harrop Formula: mike: why it a 1 on the formula side? harold: to be consistent with Horn csma: have the same problem as yesterday: syntax vs. metamodel points of view hassan: to harold why do you want this "At all costs" ... don't understand arg for having facts without implies? harold: it make is simpler to write facts csma: so we keep it as it for now dave: but you do need to get the disjunction in there jos: there is a way to do that in uml chris: let's use an intermediate class like in first diagram harold: ok, we can use "clause" csma: clause is either a rule or a fact csma draws diagram on whiteboard sandro recommends some changes mike: what about duplication of positive? harold: it is just a readability thing <Hassan> mikes??? sandro: forall vs. rule, ... rules should be same as formula csma: suppose we need existential rule variables (shared by body and head) harold: that can definitely happen ... it would be side-by-side with forall harold goes to whiteboard harold: ruleset cotains 0 or more of univerally or existen clauses csma redraws diagram <Hassan> It's hard to follow : no mikes for most (except Christian) and no diagrams! csma: postpone decision vis-a-avis core csma describes simpler diagram csma: does any object to having this in core wd1? <Hassan> The description was too fast for me to catch all details ... :-( sandro: where is rule? jos: we need to be consistent. rename ruleset or use something called rule mike: it is odd not to have "rule" <Hassan> I second Mike's point... <sandro> Christian objects to my proposal that Rule==Formula on the grounds that recursion is too much for WD1. csma redraws diagram with rule inserted btwn ruleset and forall <sandro> Christian proposes a replacement version where Rule is a superclass of Forall, but under Forall is the same as before, for now. <Hassan> A pic of updated diagrams would be nice (anyone a camera)? harold: rule is very general includes facts ... and integrity constraints paul is taking a shot of the diagram harold: allows non-ground facts csma: any objections to new diagram? sandro: can a ruleset directly contain a clause? csma: yes sandro: consider the xml csma: concrete syntax not supplied by this diagram <Hassan> (Thanks Paul!) sandro: a fact would still need an empty forall list <sandro> sandro: can a clause be a rule? sandro: you don't want to recurse on rule csma: i don't understand the implications of that ... keep it like this for wd1 harold: I will add some "blue" explanatory text about this csma: no new material ... add new comments to draft in progress, not to released wd1 <Hassan> I do not have the info yet to vote paul: can we identify these diagrams somehow sandro: is fact a superclass of postive? csma: fact is a kind of clause <Hassan> Is there a better name than "positive"? harold: may be a bit redundant csma: we can't resolve all the issues, but can we agree to publish "that one" for wd1 ... prefers to keep "fact" and positive separate to avoid recursion harold: wants to merge them ... but don't call it either of those, might use "litform" csma: straw poll on replacing fact with positive, currently call the merger "positive" <sandro> straw poll: merge Fact and Positive 5 in favor, 3 against, 5 in favor, 3 against csma: put this merger in 1st wd mike does not object no objects to changing csma objects csma withdraws objection harold and michael: we like "Atom" hassan: objects to other names too, like "uniterm" etc csma: not discussing that now csma describes change of fact to atom hassan is ok with publishing in wd1 except for certain names sandro: as its written here the role names are not in diagram hassan: what about uniterm? harold: a "universal term" atom or expression csma: what do we need to add to diagram., roles? harold: we need it to bridge communities hassan: i differ ... if-then in production rules is not implies paul: this is an "Abstract model" for an abstraction... dave: but for nowthe xml syntax would contain these names csma: don't add names for roles for now, avoid contention harold: what about the minus signs sandro: are you proposing giving up mapping to xml csma: no, but don't include names for roles in wd1 <Hassan> I agree with Sandro sandro: first wd should be implementable ... i thought we had an xml syntax from these diagrams, fully striped harold: we need the roles paul: will the syntax use class or role names sandro: both paul: shouldn't class be generic roles specific to domain csma & paul: use body and head for roles paul: the vocabulary can change for other dialects ... atomic formula ok, implies and forall no hassan: antecedent, consequent, var to variable sandro: sympathetic to it, but torn because everyone thinks in terms of if-then john: if-part then-part <sandro> scribe: Sandro <scribe> scribenick: sandro <Harold> Because of their identical content models, Uniterm is unification/merger of an Atom (in the sense of a predicate applied to arguments) and an Expression. As a minor new point, instead of POSITIVE we could say ATOMICFORMULA (in the sense of Uniterm or Equal). csma: (reviewing diagram) ... "decalre" renamed to "variable" ... "implies" to "Conditional" (to match "Atomic") ... "ifpart", "thenpart". Harold: we're had many versions of these names. hard to see all the consequences..... ... I'd object to "ifpart" csma: so we stick with the old names for WD1 ... so back to "if" and "then" and "implies", ... and still "atomic" ... and 'declare' instead of "variable". ... I want it on the record that this was discussed and may be discussed again. we are in no way committed to this version. Hassan, are you calling in? csma is working on getting diagram out in e-mail. vpn troubles. <Hassan> (Received - thanks Chris) PROPOSED: in WD1 we'll publish this diagram, labeled as "still under discussion". RESOLUTION: Use diagram in, in Core WD1, labeled "still under discussion" <Hassan> I can't hear <Hassan> Can someone post a pointer to the topic at hand if there is any (slides maybe?) chris is working on it, Hassan, I think. MikeDean: If there's a language designed for human consumption, than some people will implement it. ChrisW: But it's not important in this WD. <Harold> The RIF Human Readable BNF Syntax was modeled on the OWL Abstract Syntax () regarding its Lisp-like prefix notation and its use of whitespace as separator. csma: the question is how to address comments about BNF. want to call in , MoZ? we're talking about your comments. <ChrisW> slides are up on the wiki csma: What is the proper way to deal with all these comments on the BNF? <Hassan> (thanks - again! - Chris...) Harold: DateTime may be more controvercial? ... We just wanted to just have something like OWL's S&AS abstract syntax. <MoZ> sandro, Zakim France seems full for the moment... MoZ, press 0 for an operator and ask them to add you -- they can over-ride the limit. Oh, Zakim France. Huh..... I dunno about that. <MoZ> sandro, tel:+33.4.89.06.34.99 csma: concrete syntax for types...... we could remove this, or more clearly label it as an examples. michaelKifer: the reason for those is to show people how they play out, in a concrete way. <LeoraMorgenstern> Sorry --- the irc had died on me for a while --- can you tell me which slides we're looking at now? csma: the dangers is that it looks so thorough that it looks like the real syntax. DaveR: Didn't we just agree we were using XML Schema datatypes? <Hassan> Michael: please speek into your microphone - thanks. <Hassan> (I did mean "speak" not "peek" :-) MK: note it as "just for illustrative purposes" csma: that not everything has been fixed, or decided by WG ... Maybe we need to label in the draft which things are decided and which are not...... PROPOSED: The concrete human-readable syntax, described is BNF, is: work in progress and under discussion. (It was already resolved as being For Illustrative Purposes Only). Sandro: (sarcastically) maybe we should label the whole things as a "Working Draft" <ChrisW> hearing noise on phone PROPOSED: The concrete human-readable syntax, described in BNF, is: work in progress and under discussion. (It was already resolved as being For Illustrative Purposes Only). csma: this resolution will let us skip many of the feedback comments. RESOLUTION: The concrete human-readable syntax, described in BNF, is: work in progress and under discussion. (It was already resolved as being For Illustrative Purposes Only). csma: so we can skip some bullets. ... reserved words? mk: not a problem in the XML -- problem in HR syntax. <scribe> ACTION: Harold to fix ForAll, FORALL inconsistencies [recorded in] <rifbot> Created ACTION-244 - Fix ForAll, FORALL inconsistencies [on Harold Boley - due 2007-03-06]. <scribe> ACTION: mkifer to delete DateTime text and use reference XSD instead [recorded in] <rifbot> Created ACTION-245 - Delete DateTime text and use reference XSD instead [on Michael Kifer - due 2007-03-06]. mk: You don't need to type uniterms because you already know type from signature moz: But can you narrow the type? mk: what's the point? moz/mdean: something like integer 1..17 mk: that's already in the languages, as sorts are defined points about xml syntax dtd dropped. default namespace rif = "" <Hassan> (sandro - pls speak UP! :-) sandro: we can get rid of the 01 with permission. ... 01 is the month PROPOSED: that xmlns in WD1 is "" ... that xmlns in WD1 is "" (consdiered preliminary) mk: How do namespaces change when standards change, eg for XML Schema Datatypes? DaveR: There haven't been any new versions... ... in RDF, they decided not to change the namespace, even though they changed the spec --- or you could change the namespace. ... There's no painless answer -- there are tradeoffs. Hassan: We'll need to face that someday -- some kind of versioning control. ... if there are examples in the draft, they should use the NS Harold: No, they'll make it look too official. DaveR: We could just state it wherever we mention the NS -- say that it's implied everywhere else. RESOLUTION: the xmlns to use for WD1 is "" <scribe> ACTION: Harold to change Core to include the xmlns namespace "" [recorded in] <scribe> ACTION: Harold to change Core to include the xmlns namespace "" [recorded in] <rifbot> Created ACTION-246 - Change Core to include the xmlns namespace \"\" [on Harold Boley - due 2007-03-06]. "The sort name should be a URI" DaveR: so use "xsd:integer" instead of "integer" in draft. <ChrisW> noise on the phone (Hassan are you muted?) Jos: *can* be URIs or *must* be URIs? mk: Why? Sandro: It's simpler to *always* use URIs Harold: "import" will need to turn things into URIs. Jos: That's normal & natural PROPOSED: all sorts will be named with URIs Chris: Are there user-defined sorts? mk: I have some language, X, and I have my own sort -- how do I exchange it with someone else. csma: If I defined shopping cards and customers, etc, am I defining sorts??? mk: I don't think so..... (hesitantly) ChrisW: i thought sorts were there for how symbols are categorized in dialects -- in which case requiring URIs is fine. I don't want to force URIs for user-defined types. ... If you want to load in some data model for your application, are you including as sorts ........ ... you do treat user defined types as sorts? mk: The document is silent about that. csma: We said earlier that identifiers would be URIs if they were not local. sandro: sounds like that should extend to sorts. ... if they are local -- you don't interchange them....? <Harold> Sorted logic example -- Schubert's steamroller: csma: depends what you means by "local", cf, local variables. mk: How about we say the sorts RIF-WG defines will be given URIs. DaveR: Sorts as a mechanisms for extending syntaxes .... is different from application-specific types. PROPOSED: Any sort defined in CORE MUST BE identified by a URI. ... Any sort defined in Core MUST BE identified by a URI. RESOLUTION: Any sort defined in Core MUST BE identified by a URI. mdean: Will we use URIs or cURIs, so you can tell whether http is a prefix or a URI scheme? ... so examples should say xsd:integer now. <scribe> ACTION: kifer to make sure sorts are named with curis [recorded in] <rifbot> Created ACTION-247 - Make sure sorts are named with curis [on Michael Kifer - due 2007-03-06]. [i3] done. [i4] already done [i5] what is the sort URI --- is it essenially the string (ie xsd:anyURI), or something else.... mk: I meant it in the sense of xs:anyURI -- an URI is a kind of string. jos: then we don't have a way to use URI to refer to abstract objects. DaveR: there's a big difference between "Jos" and Jos himself -- the signature of a predicate might say it pertains to strings or people.... Allen: (workshop) Dave: There's some muddyness about things vs pages -- that's not what we're talking about here. Jos: this is well understood in RDF (example of different URIs) csma: (incomprehensible) ... a predicate will be in a boolean sort and if it's identified by a URI, then..... predicate-name can be a URI mk: constants that identify cars, constants that identify people, constants that idenfity pencils, ..... ... a database is a bunch of a symbols --- it's in the mind of the creator of the DB that those symbols are associated with people, etc. <Harold> Following up on the discussion yesterday, and what Jos just indicated, the URIs, and are all different as xsd:anyURIs but equivalent as RDF URIrefs. DaveR: Suppose I'm writing a library of builtins. I'd write signatures for those functions. I want to create a strlen builtin, and some that apply to real-world things. csma: first case sort is URI, second case sort is a Resource. Dave: I think we need "Resource" as another sort. mk: anyURI --- elements of the sort have internal structure (eg schema, path, host), and may have a method toString, and it can have a method "fetch". URI and String are different, but can be converted to each other. Dave: Fine -- but that's all different from Resource. mk: If you are using a URI to denote a person, that's your business, as in a db. jos: Not true. In XSD an anyURI denotes itself, it cannot denote a person. mk: but in a database it can. Jos: We are not talking about databases here. <AxelPolleres> if I might hook in here, I think that making this difference between resource and URI-typed literals in RDF doesn't seem to be such a good idea and makes quite some troubles, IMO. but this just as a side note. <Harold> Besides proceeding from string-like anyURIs to equivalence URIrefs classes, we need also need 'dereference' URIrefs. The semantics for this dereferencing depends on the URI sort: for URIs denoting individuals, dereferencing just moves towards the semantic domain element; for URIs denoting another RIF Ruleset, dereferencing could be regarded as a importing it. <AxelPolleres> ... well, but I see the point (of jos, dave) Dave: example of RDF: "someURI"^^xs:anyURI vs someuri Jos: I'm not sure we need a sort for this. These are just constants. sandro: is there a universal sort? <Harold> Sandro, we considered to introduce a universal rif:Any sort. <AxelPolleres> owl:Thing? <AxelPolleres> maybe not.... mk: I we're making statements about Chris, and he has a URI, why can't I say he's an anyURI ? Jos: This is the usual way. Abstract domain and concrete domain. <Harold> Axel, there was a discussion about 2 months ago with Dave about owl:Thing perhaps being rif:Any, but then he brought in rdf:Resource... Jos: people are in abstract domain, concrete domain might have a URI in it. csma: two separate discussions. 1 -- "URI" sort in core is xs:anyURI -- agreement **YES** ... 2 -- do we need a Resource sort some day -- unknown. <AxelPolleres> thanks harold, can you paste the uri to the thread maybe? Dave: the sort here might be rdfs:Resource, but I'm not sure that's exactly what we need here. ... but I think we're tabling this for now. Jos: Why have anyURI in there? It's pretty obscure. Just have strings. <Hassan> For what it is worth, I agree with Jos... <Harold> Axel and Dave, I guess it was off-line, so if Dave is fine, I will search my mailbox and forward to you and everyone interested. Sandro: it's just a subclass of string. Why bother? MikeDean: Actually it's not a subclass of string. Sandro: Ah, okay. Still, it' kind of obscure. Jos: I think all the text about URIs in the Core is based on this misunderstanding. <Harold> In anyURI is *a sibling of* string (it's not *a* string). Chris: We just recently agreed that sorts in Core would be named with URIs..... is that related? mk: No. ... it's a name which looks like a URI csma: we need it if we have predicates that apply to URIs. +1 mk: what sorts do predicate names come from? eg, maybe we want to restrict it to strings that look like URIs. <Hassan> Jos: anyURI has a value space ... for naming predicated, we want strings, not anyURIs ... just quote the RDF specs about what URIs are -- don't use anyURIs. mk: we might want to allow, eg, integers as names of predicates, but not floating point numbers. so for this kind of thing, we want URIs here. Jos: use URIReference as in RDF <Hassan> Very good analysis Dave! I agree ... Dave: we don't have "this is a predicate, and here is its identifier _____" ---- we're talking about the mechanism. <Hassan> To rephrase Dave's in French: "Nous mettons la charrue avant les boeufs!" ("we worry about the plow before the we have oxen!") mk: we just need a lexical space, without any associated bagage of equality in the value space, etc. ... if you don't have sorts, then anything can be used in any contexts. Sorts allow us to say URIs can be used to name predicates, but for instance that floating point numbers cannot. <allen> check out section 6.4 of <Hassan> don't URIs have a canonical form? I don't think so, Hassan. <Harold> An equality theory for URI should look into rfc3986 "Uniform Resource Identifier (URI): Generic Syntax" (). <DaveReynolds> The XSD section is at: Jos: typical use of sorts is just syntactic disambiguation -- however, we've also been using it for XML schema datatypes which suggests the value space semantics ... Two URIs for the same person cannot be stated to be equal because of course the strings are not equal. mk: ah ha! <Hassan> Sandro: isn't 6.2.2. Syntax-Based Normalization in the link Harold just posted defining such a canonical form? sorry Hassan, I'm scribing. or trying to scribe csma: rif:URI as sub-sort of xs:string Jos: but we need to be explicit about them being interpreted in some abstract domain. mk: If we're are talking about the sort of integers, than all the equalities in xsd should be there. csma: but not for uris. Jos: just have to be careful not to use any unsorted names. mk: all constants are sorted. ... So..... ... we'll have to define our own URI sort, with the lexical space coming from RFC 3986. Dave: When push comes to shove, we'll have two different things here, with different value space. Chris: The difference between a URIRef and a Resource. Dave: Yes. <Harold> Dave, isn't this like What is in the middle of "Paris"? csma: let's raise an issue on this. <Harold> (The distinction between names and their denotations has been discussed in philosophy for a while.) <scribe> ACTION: Deborah to raise issue on rif:URI sort [recorded in] <rifbot> Created ACTION-248 - Raise issue on rif:URI sort [on Deborah Nichols - due 2007-03-06]. <Harold> s/"Paris?"/"Paris"?/ PROPOSED: replace uri with rif:URI in WD1 and link to issue. RESOLUTION: replace uri with rif:URI in WD1 and link to issue. <Hassan> when do we reconvene? <scribe> ACTION: mkifer to update Core with rif:URI and link to ussue. [recorded in] <rifbot> Created ACTION-249 - Update Core with rif:URI and link to ussue. [on Michael Kifer - due 2007-03-06]. <Hassan> thanks - bon appetit Reconvene at 1:30 (eastern(Session continues after lunch break <johnhall> scribe: johnhall ChrisW: start with DAve Reynolds i6 ... integer and decimal make more sense? josb: just use integer and decimal sandro: can' just change the charter josb: charter says integer csma: at least int chrisW; charter required inte, is this proposal to support at least 'long'? chrisw: go back to charter and discuss adding others for next WD mk: implement long, have inplemented integer? daveR: integer/decimal pair is sensible mk: double or float exist and can be taken as decimal ... ... in fact decimal requires lot of work chrisW: go back to charter josb: charter includes ' decimal' chrisW: anyon object to adding decimal? no objections daveR: also deal with float and double chrisW: resolved - leave draft as is? RESOLUTION: keep text as in draft, which changes datatype list from charter by replacing int with integer. csma: charter "other primitive sorts ..." DaveR i7 DAveR; had not defined RuleSet scribe: now we have DaveR: Issue in WD after second picture ChrisW: add placeholder "WG has still to discuss ordering"? josb: discussed in last F2F ,,, decided on not ordering harold: 'ordered' could be XML attribute chrisW: action on MK and Harold to replace diagram and remove issue DAveR i8 DAveR: for WD2 <ChrisW> ACTION: harold to delete the issue below the rule diagram [recorded in] <rifbot> Sorry... I don't know anything about this channel chrisW: postpone, also i9 <sandro> rifbot, help? <rifbot> See for help (use the IRC bot link) <sandro> ACTION: Sandro to rest rifbot [recorded in] <rifbot> Created ACTION-250 - Rest rifbot [on Sandro Hawke - due 2007-03-06]. <sandro> ACTION: harold to delete the issue below the rule diagram [recorded in] <rifbot> Created ACTION-251 - Delete the issue below the rule diagram [on Harold Boley - due 2007-03-06]. chrisW: someone edit wiki page as we go? Harold volunteers chrisW: focus mainly on green highlighted issues and respond ... address the first on for WD1? csma: could it be resolved by just adding a sentence ... ? mk: could say that dialiect is a logic-based language csma: prefer 'rule-based' chrisW: remove 'rule-based'? ... doesn't bother me harold: rule language? chrisW: 'rule-based' aand remove green second green issue - fix agreed first issue in section 2 scribe: The following paragraph should be elsewhere. chrisW: remove following paragraph correction - just remove para in green daveR: some ed corrections - e.g. wrong URIs and suggestions for rephrasing ... para below links, strike para re. examples mk: in core - have we decided? chrisW: just strike examples? ... talking about eaxmples as well as core ... Delete blue text and presceding sentence ... fix "to support the web ..." mk: will do off-line harold: the parenthetical remarks ... remove "striped" and related issue first green issue in "SYNTAX" chrisw: remove reference to stripe skipping? csma: BNF is instantiated into concrete syntax ... but we need to explian that it is not a transformation ... does not belong in the WD anymore MK: ahreed that metamodel cannot be used to generate syntax chrisw: do not have to explian the algorithm csma: but may have to add some comments second green issue in SYNTAX <Harold> The concrete human-readable syntax, described in BNF, is: work in progress and under discussion. (It was already resolved as being For Illustrative Purposes Only). chrisW: new para before the BNF box ... and delete the green mk: it needs to be there csma: we know we need to fix it chrisW: if we have a BNF syntaxt it needs to be a good one next green issue "Currently CONSTNAME is undefined..." chrisW: move to next next green issue "Should we allow certain special characters ..." chrisW: can remove criticisms of BNF - we know it has to be fixed harold: anonymous veriables were rejected csma: we can deal with the action later ... we can deal with issues and remove some of the colored text, but not all actions Semantic Structure csma: blues boxes to end notes chrisw: we should merge conditions with 'rule' section ... found section names confusing harold: remove parentheses csma: cannot see different levels in headings chrisW: need to raise the levels ... need to see what are subsections of what csma: can it be done offline? daveR: "Other primitive sorts that are likely to be incorporated include long, double, date, and duration." ... delete 'duration' mk: is needed daveR: we will fixit but xsd:duration is not the answer Issue "Need to provide BNF and XML syntax for arrow/Boolean sorts here" MK: remove issue issue: "Need to decide if sort symbols are also coming from Const." harold: action 247 mk: did not decide where to define sort URIs <sandro> CURIE reference seems to be chrisW: delete all three green issues ... we have sections on sorted and unsorted core csma: we decided some weeks ago to do this chrisW: unsorted core semantics are irrelevant ... there only for explanation ... requires a big fix to move from 'how to add sorted to unsorted' harold: add a subheading? ... main heading 'Semantic Structures' applies only to first para <scribe> ACTION: fix heading structure on MK [recorded in] <rifbot> Sorry, couldn't find user - fix <sandro> ACTION: mkifer to fix heading structure [recorded in] <rifbot> Created ACTION-252 - Fix heading structure [on Michael Kifer - due 2007-03-06]. mk: what are W3C onventions for headings? csma: have to check chrisW: Now in 'rules' "RIF RULE LANGUAGE" josb: resolved with RIF core to cover Horn logic, not higher order ACTION on MK to add words on predicates, functions constant symbols, disjoint sorts MOF/UML metamodel chrisW: .. extending the metamodel of positivre conditions is show below "SYNTAX" chrisW: delete text, update symbols in examples csma: and add words on "workin in progress ..." "The following extends the mapping in 'Positive Conditions' ..." chrisW: "The following extends the example syntax in Positive Conditions ..." and delete the DTD sentence "SEMANTICS" chrisW: blue text becomes end note "RIF Compatibility" chrisW: remove "here" in RIF-OWL and RIF-RDF compatibility WIKI-TR diagnostics <ChrisW> hassan, are you there now? <LeoraM> Hassan got out around 45 minutes ago, I think ... <LeoraM> I got off the phone around 20 minutes ago or so ... <LeoraM> It was getting hard to follow ... chrisW: actions to be completed by ...? josb: done harold: at least one week chrisw: can work tomorrow on this ... we also have architecture and RIFRAF ... new UML diagrams? harold: not yet MK: not much time next week csma: telecon 2 weeks from now? MK: March 16 csma: for new version chrisw: what kind of review to accept WD? ... for example - vote now to accept subject to harold and michael completing actions? DaveR: see frozen doc and vote at telecon chrisw: telecon on 27 March? ... review is go/no go ... prefer not another round ... can accept subject to typos csma: what would cause "no"? chrisW: actions unfulfilled ... no new issues ... working draft to let the world know what we are doing <sandro> PROPOSED: to publish Core WD1, pending actions performed as discussed so far this meeting. josb: new material - 2 paras harold: fix in f2f csma: have modified metamodel ... whole doc did change chrisW: but changes agreed <sandro> PROPOSED: to publish Core WD1, if ACTIONS assigned in this meeting so far are done to our satisfaction. (That is, no new issues should arise to block publication of Core WD1) csma: clarification - if actions are done, accept document? chrisW: yes csma: actions done to WG's satisfaction RESOLUTION: to publish Core WD1, if ACTIONS assigned in this meeting so far are done to our satisfaction. (That is, no new issues should arise to block publication of Core WD1) chrisw: any objections to resolution? RESOLVED chrisw: new draft for March 16, one week for review <sandro> expected vote to publish on the 27th. chrisw: vote to publish March 27 <PaulVincent> scribe: PaulVincent <scribe> scribe: PaulVincent External Data Model breakout <apologies for delay in scribing - restarted IRC> Mike: does "external" include OWL etc? yes Jos: what vocabs are required and how much is required in RIF? ... need for vocab translation as part of RIF role? Christian: example: shopping cart domain + rules to be interchanged reference domain object model - do they use the XML schema directly or translate to a form for interchange? ... one option is just to adopt a single data model used in interchange -- so burden is on implementer / translation which implies a new translator for each application Jos: different (use) cases require different treatments for vocabularies Mike: XML schema can be much harder than OWL/RDF for translators Christian: an XML schema representing a data model [eg ACORD insurance model supported by rule tools from ILOG and Fair Isaac] Paul: XML schema for domain specific languages represents a data model + vocabulary for the domain Reference: for ACORD / insurance industry Jos: lightweight approach: rules use vocab with particular URIs relevant to a schema Christian: problem with this approach: does not fit model ie predicates Paul: existing BREs use an object mapping mechanism to map disparate object/data/other data models to an OO model referenced by rules Christian: qu how to map a relational (data ) model to the RIF Condition Metamodel Andreas: Can use graph-directed model to represent other models Jos: OWL-DL maps to relational model ... RDF is not just a graph... Christian: what is OWL compatibility for RIF? OWL and RDF data is a part of the overall problem ... most industry-specific models are relational and therefore can map to the RIF Condition Language metamodel Mike: ... but the metamodel displayed does not go into the detail for data model issues Christian: how does RIF hook into externally defined data models? ... mapping an object model into a standardized model may be too expensive from a translator perspective <Christian waves hands in front of screen> Christian: ... or can users plug in own data models Jos: They can already plug in their own models via URIs Christian: plug-in issue is that the plug-in interpreter takes on the cost of interpretation and needs to be the same on both provider and consumer of RIF Correction: Christian: enforcement of a relational versus OO versus other model will be a translation issue Jos: these concerns re Core may be pointless as Core is of limited practicability Christian: ... but principles apply to all dialects ... assumption that there will be 2 customers who often share data model types Andreas: RDF - data and meaning layers - may be way to go here John: issue is that domain specific languages need to be usable directly in order to allow adoption <johnhall> That wasn't quite my concern. I said it would be unfortunate if RIF actually precluded organizations from using solutions already in place. Jos: which dialects require this issue Christian: statement "if they have the same object model they don't need RIF" is wrong as they still need to interchange rules Jos: ... but you also need things like variables Allen: is this RIF Core? Phase 2? ... a new requirement not in RIF at present Christian: need to enumerate mappings for external data Mike: note even several mappings for RDF and tools like JESS Christian: Example: XBRL for financial reporting: have a complex structure, interchange rules as text Jos: propose: 2 dimensions; type of vocab language + degree of integration in RIF Christian: how do we define compliance if there is a plug-in environment <LeoraM> +1 with Mike Dean's suggestion to ground this in a concrete example Jos: RDFs requirements are needed <LeoraM> +1 also to instantiating the use cases Mike: need to ground requirements in expanding use cases <breakout sessions end; main session reconvenes> Summary by Jos of breakout for external data models 1. Definition of external data models: data structure / vocab eg XML schema or OWL 2. How would data structure be represented in RIF rules 3. Proposed: plug-in for external data models 4. Should not focus on RIF Core limitations ie other dialects may require OO data structures 5. May need special treatments for RDFS and OWL 6. Working group needs some requirements for external data models use in RIF 3. correction: proposal was to indicate range of options from plug-in for arbitrary models to mapping everything to a single Core data model <allen> dru Dave: does this include option of eg using a single a URI to reference to what you mean eg complex types <allen> McCandless, Dru <sandro> thanks. Dave: coverage of RDF and XML should cover most options Christian: need examples to better understand mapping needs <sandro> RIF Syntax Breakout Summary by Chris of the breakout for syntax 1. Different paradigms between metamodels and ASN abstract syntax - metamodel includes items not in syntax 2. Sandro can now generate near-UML diagrams from ASN06 so publication should specify these as "not metamodel" 3. From ASN06 will generate XML schema as XML syntax specification 4. Need for human-readable presentation syntax <sandro> PROPOSED: We'll use UML to help people visualize our abstract syntax -- but we'll be clear that it's not a metamodel. <sandro> PROPOSED: We'll use UML to help people visualize our abstract syntax -- but we'll be clear that these UML diagrams are not metamodels 5. Discussion on presentation syntaxes - Sandro will provide some examples to be generated from ASN06 (as "RIF Presentation Syntax") Hassan: is there a BNF/grammar for ASN06 - yes - so Hassan can implement an XML output too <sandro> PROPOSED: We'll use UML to help people visualize our abstract syntax -- but we'll be clear that these UML diagrams are not metamodels Hassan: need semantics for ASN to be able to discuss Chris: abstract syntax is not normative <sandro> Chris: I want these not to confuse people used to metamodels. <sandro> Chris: I want them not to find them lacking. <sandro> csma: These are graphical views of the abstract syntax using UML notation. <sandro> Sandro: it's not all of UML, but we what UML we use should be correct. <sandro> PROPOSED: We'll use UML to help people visualize our abstract syntax. We'll say "these are graphical views of the abstract syntax using UML notation". RESOLUTION: We'll use UML to help people visualize our abstract syntax. We'll say "these are graphical views of the abstract syntax using UML notation". <sandro> PROPOSED: we need a presentation syntax Christian: viewing a RIF Presentation Syntax example: would keep roles not classes <sandro> PROPOSED: we need a presentation syntax -- to be used for examples and in the specification of the semantics. Harold: Presentation Syntax is WD2 and later Chris: this is not normative at this point in time (although examples etc in future will need a presentation syntax) <end of F2F5 day2> This is scribe.perl Revision: 1.128 of Date: 2007/02/23 21:38:13 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/horne/Horn/ Succeeded: s/mike objects/mike does not object/ Succeeded: s/for not /for now/ Succeeded: s/Zakim/Zakim France/ Succeeded: s/Feedback from Moz on Core/Reader Feedback on Core/ Succeeded: s/use/user/ Succeeded: s/canocical/canonical/ Succeeded: s/strings/uris/ Succeeded: s/"Paris?"/"Paris"?/ FAILED: s/"Paris?"/"Paris"?/ Succeeded: s/cold/could/ Succeeded: s/taling/talking/ Found Scribe: Allen Inferring ScribeNick: allen Found ScribeNick: Allen Found Scribe: Sandro Inferring ScribeNick: sandro Found ScribeNick: sandro Found Scribe: johnhall Inferring ScribeNick: johnhall Found Scribe: PaulVincent Inferring ScribeNick: PaulVincent Found Scribe: PaulVincent Inferring ScribeNick: PaulVincent Scribes: Allen, Sandro, johnhall, PaulVincent ScribeNicks: allen, sandro, johnhall, PaulVincent Default WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth Got date from IRC log name: 27 Feb 2007 Guessing minutes URL: People with action items: deborah fix harold kifer mkifer sandro[End of scribe.perl diagnostic output]
http://www.w3.org/2005/rules/wg/wiki/F2F5/27-rif-minutes.html
CC-MAIN-2013-48
refinedweb
6,359
62.98
#include <genesis/utils/io/gzip_stream.hpp> Inherits ostream. Output stream that offers on-the-fly gzip-compression. The class accesses an external std::streambuf. It can be constructed from an existing std::ostream (such as std::cout) or std::streambuf. The GzipOStream destructor flushes all reamining data to the target ostream. However, if the ostream needs to be accessed before the GzipOStream is destroyed (e.g., goes out of scope), the GzipOStream::flush() function can be called manually. The class is based on the zstr::ostream class of the excellent zstr library by Matei David; see also our Acknowledgements. If genesis is compiled without zlib support, constructing an instance of this class will throw an exception. Definition at line 193 of file gzip_stream.hpp. Definition at line 516 of file gzip_stream.cpp. Definition at line 522 of file gzip_stream.cpp. Definition at line 528 of file gzip_stream.cpp.
http://doc.genesis-lib.org/classgenesis_1_1utils_1_1_gzip_o_stream.html
CC-MAIN-2020-45
refinedweb
148
61.33
Big Trouble in the Big Apple: Fear Meets Faith By Nik Milanovic, Business Manager, published October, 2010 In tough economic times, people reach for two things: their wallets and their guns. In the case of the Tea Party and the GOP this season, the wallet-reflex has manifested itself in the form of cries for reduced taxes and shrieks of indignation at government spending. The gun-reflex has much more prominently reared its head in the form of unabashed xenophobia. Any American with a minimal attention span for domestic news has by now heard of the Arizona immigration law. He or she has been subjected to campaigns from Wisconsin to Georgia deriding immigrants, legal and illegal, for their “parasitic leeching” of America’s resources. But any sensible rationale for intolerant legislation has been thrown clear out the window in the latest incidence of racist fanaticism regarding the construction of a recreation center in New York. The center, called Park 51 and based on the 92nd St. Y (a Jewish community center – the Y was short for Young Men’s Hebrew Association), would deliver an open space replete with numerous services and facilities for the community. The plans include a basketball court, swimming pool, auditorium, meeting rooms, a garden, classrooms with public classes, and a memorial to 9/11. The controversy arising over the planned construction of Park 51 is due to what was initially known as the Cordoba House, an initiative which would use part of the building as a prayer space for Muslims. It is estimated that New York may be the city with the most Muslims in the western hemisphere. Various discussion groups with New Yorkers of different ethnicities and faiths all yielded the same answer: they wanted a community center in Lower Manhattan. It just so happens that the site for this center is two blocks from the old site of the Twin Towers: Ground Zero. The Cordoba House is the idea of Imam Feisel Abdul Rauf, a well-intentioned American cleric who wants to use the space to promote interfaith dialogue and understanding. Ironically, the planned prayer room is now a touchstone for intolerance and invective rhetoric: the antithesis of dialogue. Conservative commentators have referred to a ‘Mega Mosque’ being built on ‘Ground Zero,’ in order to misconstrue the facts and incite the undercurrent of xenophobia gaining steam in America. Protestors have compared the construction to building a memorial to Hitler next to Auschwitz or building a monument to terrorism. Most surprising is the reaction of national Tea Party and GOP leaders against the planned recreation center. Former vice-presidential candidate Sarah Palin called on Muslims to ‘refudiate’ construction (she clarified that she meant refute, though she likely meant repudiate, and then compared herself to Shakespeare.) She, along with Arizona Senator John McCain, former Massachusetts Governor Mitt Romney, and a host of other senators and congressmen, have called the plan ‘insensitive’ to Americans and 9/11 families. In their defense, their concern is about sensitivity and the possible reactions and harm that could result from the construction. However, they still make the fallacy of equating terrorism to Islam, implying that dedicating a space to Islam would insult those victimized by terrorism. Even the moderate concerns about hurt feelings and people’s reactions still excuses Americans for equating the two. The most inflammatory remarks came from former Speaker of the House Newt Gingrich, a prospective Republican presidential candidate who also dubbed the proposal an “aggressive” act equivalent to a “Nazi sign next to the Holocaust Museum.” He added to his remarks by asserting that there should be no mosque by Ground Zero so long as there are no churches or synagogues in Saudi Arabia.” (This slogan also appeared on many protest placards.) Americans are quickly abandoning their pride in the USA’s reputation as a safe haven for people of other cultures and countries. The message of such prominent GOP and Tea Party icons is clear: America is now closed. Please get out and take your culture with you. The comparison to Saudi Arabia highlights an obvious loss of pride in the United States: Americans no longer hold their country to a higher standard than a repressive dictatorship. If they can be intolerant, then so can we. Not so far from the proposed mosque site stands a statue, for generations a symbol of America’s promise to the world. Engraved on the base of the statue is a sonnet with the famed message, “Give me your tired, your poor, your huddled masses yearning to breathe free.” The statue is named Liberty, and she gazes out at the Atlantic with a torch to light the way for other cultures, welcoming them. Perhaps from the base of that statue, you can hear the chants of the protestors, the invective of the Tea Party and GOP. Their message is just as clear: leave our country. Your faith, your beliefs, and your culture are not welcome here. This land is not for you. 3 Comments » - Tweets that mention Big Trouble in the Big Apple: Fear Meets Faith « The Stanford Progressive -- Topsy.com, November 2, 2010 @ 12:25 pm [...] This post was mentioned on Twitter by Roosevelt Wright, Jr, Apple News & Feeds. Apple News & Feeds said: Big Trouble in the Big Apple: Fear Meets Faith [...] [...] Originally @ Stanford Progressive [...] Is there any information about this subject in other languages? Share this Article Top News - Gabbard says US undermining North Korea peace bid with Venezuela, Iran policies - Fox News - Colorado Avalanche's comeback nets 3-2 OT win - Boulder Daily Camera - A goddess inspired a Taiwanese billionaire to follow Trump's example - CNN - Democrats outraged as Trump team shapes Mueller report rollout - CNN - Samsung speaks up about broken Galaxy Fold review units - Engadget - Madeira bus crash: At least 29 killed on tourist bus - BBC News - Scientists revive cellular activity in brain of dead pigs: report - Fox News - Facebook says it 'unintentionally uploaded' 1.5 million users' email contacts without permission - CNBC - Man caught with 2 gas cans entering St. Patrick's Cathedral in NYC, police say - Fox News - Pinterest prices public offering at $19 per share - Alabama's News Leader?) :
https://web.stanford.edu/group/progressive/cgi-bin/?p=961
CC-MAIN-2019-18
refinedweb
1,026
58.21
Ernesto Emmanuel Gutierrez Muñoz13,520 Points Exercise Model ASP.NET MVC is wrong in the second step i need to make a read only property so i did and gives me an error i think people of treehouse that you have an error in your code here is my code of how i did namespace Treehouse.Models { public class VideoGame { public int Id { get; set;} public string Title {get; set;} public string Description {get;set;} public string[] Characters {get;set;} public string Publisher {get;set;} //read only property public string DisplayText { get { return Title + "(" + Publisher + ")"; } } } } Please Help me 1 Answer Steven Parker200,418 Points To prevent things from running together, you need to add a space in front of the open parenthesis when you build up "DisplayText". Other than that, good job! Ernesto Emmanuel Gutierrez Muñoz13,520 Points Ernesto Emmanuel Gutierrez Muñoz13,520 Points Thanks man i didn't pay enough attentiion but with your help i solved the problem Thanks a lot man
https://teamtreehouse.com/community/exercise-model-aspnet-mvc-is-wrong
CC-MAIN-2020-29
refinedweb
164
57.13
22 June 2012 12:50 [Source: ICIS news] LONDON (ICIS)--The European benzene spot market is totally detached from movements in the value of Brent crude because the market is so tight, sources said on Friday. Crude prices hit their lowest level since 2010 on Thursday but this did little to the price of June benzene, which moved up on Friday morning to $1,245-1,300/tonne (€996-1,040/tonne) CIF (cost and freight) ARA (Amsterdam Rotterdam Antwerp). However, the current spot value of benzene for June business was $50/tonne below the levels recorded in the market a week ago. Nonetheless, sources said there was not a “single molecule” available for June and for the first half of July spot prices were also firm at $1,160-1,200/tonne, but then heavily backwarded for August. August benzene was trading in a $1,040-1,090/tonne CIF ARA price range. Despite reports of some 49,000 tonnes of imports arriving in Europe from Asia and the ?xml:namespace> Some 30,000 tonnes are imported per month and all of this material has been sold, as had the remaining 19,000 tonnes, a source said. In relation to the unabated strength of spot benzene prices, a trader said: “It’s simple - there are no molecules and today is the last date of official nominations. Crude can go down as much as it wants but people will still buy [benzene] at these levels. “All the 49kt that is coming is sold and there really is no liquidity in the market,” the trader added. Various planned and unplanned outages in Dow, Total and ExxonMobil are all named as producers who have been in shutdowns, but none of these outages have been confirmed at source. ExxonMobil was planned while the others were not, according to market followers. All three facilities are due to come back onstream sometime next week, but even if this is the case, sources believe that the additional capacity will not have any significant bearing on the price of spot benzene until
http://www.icis.com/Articles/2012/06/22/9572131/europe-benzene-spot-detached-from-crude-price-movements.html
CC-MAIN-2013-48
refinedweb
345
63.22
Three Useful Monads Note: before reading this, you should know what a monad is. Read this post if you don't! Here's a function half: And we can apply it a couple of times: half . half $ 8 => 2 Everything works as expected. Now you decide that you want to log what happens in this function: half x = (x `div` 2, "I just halved " ++ (show x) ++ "!") Okay, fine. Now what if you want to apply half a couple of times? half . half $ 8 Here's what we want to have happen: Spoilers: it doesn't happen automatically. You have to do it yourself: finalValue = (val2, log1 ++ log2) where (val1, log1) = half 8 (val2, log2) = half val1 Yuck! That's nowhere as nice as: half . half $ 8 And what if you have more functions that log things? There's a pattern here: for each function that returns a log along with a value, we want to combine those logs. This is a side-effect, and monads are great at side effects! The Writer monad The Writer monad is cool. "Hey dude, I'll handle the logging," says Writer. "Go back to your clean code and crank up some Zeppelin!" Every writer has a log and a return value: data Writer w a = Writer { runWriter :: (a, w) } Writer lets us write code like this: half 8 >>= half Or you can use the <=< function, which does function composition with monads, to get: half <=< half $ 8 which is pretty darn close to half . half $ 8. Cool! You use tell to write something to the log. And return puts a value in a Writer. Here's our new half function: half :: Int -> Writer String Int half x = do tell ("I just halved " ++ (show x) ++ "!") return (x `div` 2) It returns a Writer: And we can use runWriter to extract the values from the Writer: runWriter $ half 8 => (4, "I just halved 8!") But the cool part is, now we can chain calls to half with >>=: runWriter $ half 8 >>= half => (2, "I just halved 8!I just halved 4!") Here's what's happening: >>= magically knows how to combine two writers, so we don't have to write any of that tedious code ourselves! Here's the full definition of >>=: Which is the same boilerplate code we had written before. Except now, >>= takes care of it for us. Cool! We also used return, which takes a value and puts it in a monad: return val = Writer (val, "") (Note: these definitions are almost right. The real Writer monad allows us to use any Monoid as the log, not just strings. I have simplified it here a bit). Thanks, Writer monad! The Reader Monad Suppose you want to pass some config around to a lot of functions. Use the Reader monad: The reader monad lets you pass a value to all your functions behind the scenes. For example: greeter :: Reader String String greeter = do name <- ask return ("hello, " ++ name ++ "!") greeter returns a Reader monad: Here's how Reader is defined: data Reader r a = Reader { runReader :: r -> a } Reader was always the renegade. The wild card. Reader is different because it's only field is a function, and this is confusing to look at. But we both understand that you can use runReader to get that function: And then you give this function some state, and it's used in greeter: runReader greeter $ "adit" => "hello, adit!" So when you use >>=, you should get a Reader back. When you pass in a state to that reader, it should be passed through to every function in that monad. m >>= k = Reader $ \r -> runReader (k (runReader m r)) r Reader always was a little complex. The complex ones are the best. return puts a value in a Reader: return a = Reader $ \_ -> a And finally, ask gives you back the state that was passed in: ask = Reader $ \x -> x Want to spend some more time with Reader? Turn up the punk rock and see this longer example. The State Monad The State monad is the Reader monad's more impressionable best friend: She's exactly like the Reader monad, except you can write as well as read! Here's how State is defined: State s a = State { runState :: s -> (a, s) } You can get the state with get, and change it with put. Here's an example: greeter :: State String String greeter = do name <- get put "tintin" return ("hello, " ++ name ++ "!") runState greeter $ "adit" => ("hello, adit!", "tintin") Nice! Reader was all like "you won't change me", but State is committed to this relationship and willing to change. The definitions for the State monad look pretty similar to the definitions for the Reader monad: return: return a = State $ \s -> (a, s) >>=: m >>= k = State $ \s -> let (a, s') = runState m s in runState (k a) s' Conclusion Writer. Reader. State. You added three powerful weapons to your Haskell arsenal today. Use them wisely. Translations This post has been translated into: Human languages: If you translate this post, send me an email and I'll add it to this list!
http://adit.io/posts/2013-06-10-three-useful-monads.html
CC-MAIN-2017-51
refinedweb
847
80.41
Input: Array of n integers, containing numbers in the range [1,n]. Each integer appears once except A which repeats twice and B which is missing. Output: Return A and B. Example: Input:[3 1 2 5 3] Output:[3, 4] A = 3, B = 4 My approach: public class Solution { // DO NOT MODIFY THE LIST. IT IS READ ONLY public ArrayList<Integer> repeatedNumber(final List<Integer> A) { //O(n) solution //Store the count of all the numbers appearing in the list int [] count = new int[A.size()]; int rep_num = 0,miss = 0; //Increase the count at the index location for( int i = 0; i < A.size(); i++ ) { int ind = A.get(i); count[ind - 1]++; // if(count[ind - 1] == 2) { rep_num = ind; break; } } //If the count has not been updated, then the number is missing in the array for(int i = 0; i < count.length; i++ ) { if(count[i] == 0) miss = i+1; } ArrayList<Integer> num = new ArrayList<Integer>(); num.add(rep_num); num.add(miss); return num; } } I have the following questions: How can I further optimize my code? Is there any better way to solve this question (i.e. using a better data structure, lesser lines of code)? Question asked on: interviewbit!
https://extraproxies.com/return-the-repeated-number-and-the-missing-number/
CC-MAIN-2021-04
refinedweb
203
66.23
I am trying a run a program with a simple list object in my code. But it throwing the same error again and again. The error is: list object cannot be interpreted as an integer python. I am attaching my code snippet below: def userNum(iterations): testList = [] for i in range(iterations): a = int(input("Enter a number for sound: ")) testList.append(a) return testList print(testList) def playSound(testList): for i in range(testList): if i == 1: winsound.PlaySound("SystemExit", winsound.SND_ALIAS) What could be the possible solution for this program? If possible figure the error, please. Thanks Every time you get the error from the compiler or the interpreter you should read it carefully. The error clearly says that you did something that’s the reason the interpreter couldn’t interpret the object as an Integer. So, where are you wrong? The range() is expecting an integer argument, from which it will build a range of integers. You can’t put a list inside the range(), it can’t handle the list. range() So, if you want to access the items in testList, loop over the list directly, just use: List >>> testList [1,2,3,4] >>> for i in myList: print (i) I hope you get the point. Thanks.
https://kodlogs.com/37966/list-object-cannot-be-interpreted-as-an-integer-python
CC-MAIN-2021-21
refinedweb
211
73.37
docx (OOXML) to html converter Project description Convert a docx (OOXML) file to semantic HTML. All of Word formatting nonsense is stripped away and you’re left with a cleanly-formatted version of the content. Usage >>> from docx2html import convert >>> html = convert('path/to/docx/file') Running Tests for Development $ virtualenv path/to/new/virtualenv $ source path/to/new/virtualenv/bin/activate $ cd path/to/workspace $ git clone git://github.com/PolicyStat/docx2html.git $ cd docx2html $ pip install . $ pip install -r test_requirements.txt $ ./run_tests.sh Description docx2html is designed to take a docx file and extract the content out and convert that content to html. It does not care about styles or fonts or anything that changes how the content is displayed (with few exceptions). Below is a list of what currently works: - Paragraphs - Bold - Italics - Underline - Hyperlinks - Lists - Nested lists - List styles (letters, roman numerals, etc.) - Tables - Paragraphs - Tables - Rowspans - Colspans - Nested tables - Lists - Images - Resizing - Converting to smaller formats (for bitmaps and tiffs) - There is a hook to allow setting the src of the image tag out of context, more on this later - Headings - Simple headings - Root level lists that are upper case roman numerals get converted to h2 tags Handling embedded images docx2html allows you to specify how you would like to handle image uploading. For example, you might be uploading your images to Amazon S3 eg: Note: This documentation sucks, so you might need to read the source. import os.path from shutil import copyfile from docx2html import convert def handle_image(image_id, relationship_dict): image_path = relationship_dict[image_id] # Now do something to the image. Let's move it somewhere. _, filename = os.path.split(image_path) destination_path = os.path.join('/tmp', filename) copyfile(image_path, destination_path) # Return the `src` attribute to be used in the img tag return '' % destination html = convert('path/to/docx/file', image_handler=handle_image) Naming Conventions There are two main naming conventions in the source for docx2html there are build functions, which will return an etree element that represents HTML. And there are get_content functions which return string representations of HTML. Changelog - 0.2.3 - There was a bug with hyperlinks that had a break tag in them. The document would fail to convert. This issue has been fixed. - 0.2.2 - There was a bug with hyperlinks that were missing text. The document would fail to convert. This issue has been fixed. - 0.2.1 - If a list had an inconsistency in the ilvls, the content for the inconsistent ilvl would be lost. Now we roll that inconsistent list into the root, no longer losing the content. - 0.2.0 - If a list had a numId that was not stored in the numbering dict, then a key error would be thrown. Now if either the numId or the ilvl for a given list tag is invalid it defaults to returning a list type of decimal. - 0.1.11 - Sometimes in the OOXML an image will have a height or width of 0. If this happens we are now ignoring the height and width in the OOXML and using the full image instead. - 0.1.10 - Added a user facing version - 0.1.9 - There was a problem for some lists that would cause missing content if the list id’s were not well behaved. This issue has been addressed. - 0.1.8 - Fixed missing content with hyperlinks with more than one run tag and smartTags. - Certain image types are now being ignored. These include: emf, wmf and svg. - 0.1.7 - If the indentation level of a set of lists (with the same list id) were mangled (Starting off with a higher indentation level followed by a lower) then the entire sub list (the list with the lower indentation level) would not be added to the root list. This would result in removing the mangled list from the final output. This issue has been addressed. - 0.1.6 - Header detection was relying on case. However it is possible for a lower case version of headers to show up. Those are now handled correctly. - 0.1.4 - Added a function to remove tags, in addition stripped ‘sectPr’ tags since they have to do with headers and footers. - 0.1.3 - Hyperlinks with no text no longer throw an error - Fixed a bug with determining the font size with an incomplete styles dict - 0.1.2 - Fixed a bug with determining the font size of a paragraph tag - 0.1.1 - Added a changelog - Styles are now stripped from hyperlinks - jinja2 is now used to render test xml - 0.1.0 - Correctly handle tables and paragraphs in lists. Before if there was a table in a list it would break the list into two halves, the half before the table and the half after the table (with the table inbetween them). Now if there is a table or paragraph in a list those elements get rolled into the list. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/docx2html/
CC-MAIN-2020-40
refinedweb
850
63.8
In the past few months we’ve really been fleshing out our cli and surfacing more and more Azure services to your fingertips, literally! In this release there is a lot of goodness, I think you are going to like what you see! Here’s a quick summary of what you should expect to find when grab the latest bits. - Custom website deployment “azure site deploymentscript”. - Mobile Service support “azure mobile”. - Windows installer. - Service Bus “azure sb”. - Storage accounts “azure account storage”. Custom Website Deployment – Deployment your way One of the common requests we hear is that folks want to customize how their code is deployed in their Azure Websites. For example, some folks want to run a custom step on the server every time they deploy to a staging environment. In this release, we’re introducing a new command for doing just that, “azure site deploymentscript”. This will generate for you a script either in bash or cmd format which contains all the server logic that will execute when your site is deployed. You can then easily customize that script to your heart’s content. When you use this feature, you really are too cool for school. To see it in action, I am going to show you how to enable a common scenario, that is make my mocha scripts run for my node app every time I push via git. Assume I created a simple express hello world by running the “express” cli. I then created a dummy test.js which will always fail. Now in my app folder, I run the deploymentscript command specifying a node app and to create a bash script. You can see the generated script here in this gist: by looking at the last file. If you look at the script you’ll see several different sections. - #Prerequisites – This section defines code that should run before anything else. For a node app here is where it detects if node is installed.* - #Setup – This handles setting up script variables or installing other necessary modules. For example you’ll see it installs the node “kudusync” module which is used for moving files from where the files are pushed to the target website. - #Deployment – This handles actually deploying the code (calling kudusync) and other steps like calling npm for node modules. *Note: There is a bug currently in the bash version of this, which uses the “where” command to find node which does not work on bash. The fix is in the gist above at line 24. One great thing about this script, is it is designed to actually allow you to run it locally. It detects whether you are in the local folder and will just create an artifacts folder for the files that are copied over. This is really useful when developing. For my unit test scenario I want to do things a little differently than the generated script does by default. - I want it to install mocha, the same way it installs kudusync - I want it to run npm before it copies the files to the website, not after. - I then want it to run mocha and if the tests fail I want the deployment to abort. Doing that means moving a few things around and adding some new steps. In order to make it also work locally there’s some light mental gymnastics, but nothing rocket science. The final script is the top file in the gist revisions () I’ve annotated the parts I changed with “# gb:” and thanks to github’s awesome diff feature you can easily see the comments and what was changed. Now when I deploy the site, it runs my tests and I get the output right when I git commit. Notice above that my tests failed and my website was not updated. This is just one of many scenarios that are now opened up to you through the new custom deployment feature. Mobile Services With this release we’re bringing you commands for Windows Azure Mobile Services so you can create mobile back ends for your Win 8, Windows Phone, IOS and Android applications right from the shell. You can create new services, provision databases, create and query tables, manage scripts and more with just a few commands! Below you can see that I am creating a new Mobile Service using “azure mobile create”. In the create call I’ve passed my service name, user and password and that’s it. As you can see it has created a new service, and deployed a new SQL Server and Database. You can also pass parameters in like –sqlServer and –sqlDb to use existing servers and databases. Next I use “azure mobile table create” to create two new tables and display their details. Now that I have tables, I can even upload a script using the “azure mobile script upload”, list existing scripts with “azure mobile script list” and download using “azure mobile script download”. With my scripts in place, I am now ready to build my mobile app to consume my mobile service. To talk to the service I need my application url and key, which I can get thanks to the “azure mobile show” command. For this example, I built a very simple example that looks a whole lot like our hello world that we create in the portal only morphed to a mini contact manager :-) Now that I have data there’s one more bit of icing on the cake, I can query my tables right from the cli! This is just a preview of the features “azure mobile” offers. Type “azure mobile –help” to see what else is in the box. CLI Installer for Windows With this release we’re introducing a one click, no hassle installer for Windows. That means if you are not a node developer, you don’t have to worry about installing, node, getting the module from npm, etc. Use the installer and everything is taken care of for you. Download the installer here. Storage Accounts and Service Bus If you are using our SDKs to talk to Windows Azure Storage or Service Bus then you are most likely using the Azure portal to create your storage account or namespace and to manage your keys. No longer is that the case! You can now use the cli for both. Storage Accounts Below you can see where I am creating a new storage account using “azure account storage create” and then retrieving the storage keys. Service Bus Next I am creating a new Service Bus namespace using “azure sb namespace create”. Notice you get returned the connection string which you can then pass directly to our SDKs when you use them. You can also retrieve the namespace information (including connection string) at any time by using the “azure sb namespace show” command. More good stuff on the way Over the coming months, you are going to see some really useful (at least we think so) features finding their way into our new CLI. For example today you can only work with accounts and subscriptions, but in the future you’ll be able to also work directly with Storage and Service Bus like uploading blobs, querying tables or even sending messages. And just to whet your appetite, here is a preview of one more very useful feature you’ve been asking for ;-). (Make sure to read what it says by ‘.description’)
https://azure.microsoft.com/en-us/blog/azure-cli-0-6-9-ships-pure-joy/
CC-MAIN-2018-13
refinedweb
1,238
69.92
A* Pathfinding A* (pronounced A Star) is an algorithm that considers traversable and non-traversable nodes while finding the shortest distance between 2 points. It’s widely used in tile-based games. There are loads of resources for this on the web already, though in my efforts, I was unable to find a pure Swift solution, so I translated one from various sources, primarily referring to this Flash implementation by Joseph Hocking. Note: This is part 3 of an ongoing series on isometric game dev. The series begins with part 1, which you can find here. If you are new to the series, you may want to start from the beginning, however, if you are looking specifically for an A* Pathfinding tutorial, then you’re in the right place. You can download the sample project here and pick up the code from this stage in the series. Alternatively, if you’re following the series and you’ve completed parts 1 and 2, you can continue on with your own code, or if you’d prefer, you can download the source material above and go from there. Ok, let’s get started. Open your IsoGame xCode project and run your app. You should see something like this: Our droid is moving to our touch location in our isometric view, but he currently moves through walls without any resistance. Implementing pathfinding will ensure that he respects the boundaries of the level, while moving to the touch location via the most direct traversable path. We’re going to keep all our pathfinding code in 1 file, so select File > New > File, then select iOS > Source > Swift File and click Next. Name the file PathFinder and click Create. In PathFinder.swift, replace the contents with this code: import UIKit import SpriteKit class PathFinder { let moveCostHorizontalOrVertical = 10 let moveCostDiagonal = 14 var iniX:Int var iniY:Int var finX:Int var finY:Int var level:[[Int]] var openList:[String: PathNode] var closedList:[String: PathNode] var path = [CGPoint]() init(xIni:Int, yIni:Int, xFin:Int, yFin:Int, lvlData:[[Int]]) { iniX = xIni iniY = yIni finX = xFin finY = yFin level = lvlData openList = [String: PathNode]() closedList = [String: PathNode]() path = [CGPoint]() //invert y coordinates - pre conversion (spriteKit inverted coordinate system). This PathFnding code ONLY works with positive (absolute) values iniY = -iniY finY = -finY //first node is the starting point let node:PathNode = PathNode(xPos: iniX, yPos: iniY, gVal: 0, hVal: 0, link: nil) //use the x and y values as a string for the dictionary key openList[String(iniX)+" "+String(iniY)] = node; } func findPath() -> [CGPoint] { searchLevel() //invert y cordinates - post conversion let pathWithYInversionRestored = path.map({i in i * CGPoint(x:1, y:-1)}) return pathWithYInversionRestored.reverse() } func searchLevel() { var curNode:PathNode? var endNode:PathNode? var lowF = 100000 var finished:Bool = false for obj in openList { let curF = obj.1.g + obj.1.h //currently this is just a brute force loop through every item in the list //can be sped up using a sorted list or binary heap, described //example if (lowF > curF) { lowF = curF curNode = obj.1 } } if (curNode == nil) { //no path exists! return } else { //move selected node from open to closed list let listKey = String(curNode!.x)+" "+String(curNode!.y) openList[listKey] = nil closedList[listKey] = curNode //check target if ((curNode!.x == finX) && (curNode!.y == finY)) { endNode = curNode! finished = true } //check each of the 8 adjacent squares for i in -1..<2 { for j in -1..<2 { let col = curNode!.x + i; let row = curNode!.y + j; //make sure on the grid and not current node if ((col >= 0 && col < level[0].count) && (row >= 0 && row < level.count) && (i != 0 || j != 0)) { //if traversable, not on closed list, and not already on open list - add to open list let listKey = String(col)+" "+String(row) if ((level[row][col] == Global.tilePath.traversable) && (closedList[listKey] == nil) && (openList[listKey] == nil)) { //prevent cutting corners on diagonal movement var moveIsAllowed = true if ((i != 0) && (j != 0)) { //is diagonal move if ((i == -1) && (j == -1)) { //is top-left, check left and top nodes if (level[row][col+1] != Global.tilePath.traversable //top || level[row+1][col] != Global.tilePath.traversable //left ) { moveIsAllowed = false } } else if ((i == 1) && (j == -1)) { //is top-right, check top and right nodes if (level[row][col-1] != Global.tilePath.traversable //top || level[row+1][col] != Global.tilePath.traversable //right ) { moveIsAllowed = false } } else if ((i == -1) && (j == 1)) { //is bottom-left,check bottom and left nodes if (level[row][col+1] != Global.tilePath.traversable //bottom || level[row-1][col] != Global.tilePath.traversable //left ) { moveIsAllowed = false } } else if ((i == 1) && (j == 1)) { //is bottom-right, check bottom and right nodes if (level[row][col-1] != Global.tilePath.traversable //bottom || level[row-1][col] != Global.tilePath.traversable //right ) { moveIsAllowed = false } } } if (moveIsAllowed) { //determine g var g:Int if ((i != 0) && (j != 0)) { //is diagonal move g = moveCostDiagonal } else { //is horizontal or vertical move g = moveCostHorizontalOrVertical } //calculate h (heuristic) let h = heuristic(row: row, col: col) //create node and add to openList openList[listKey] = PathNode(xPos: col, yPos: row, gVal: g, hVal: h, link: curNode) } } } } } if (finished == false) { searchLevel(); } else { retracePath(endNode!) } } } // Calculate heuristic //Diagonal Shortcut method (slightly more expensive but more accurate than Manhattan method) //Read more on heuristics here: func heuristic(#row:Int, col:Int) -> Int { let xDistance = abs(col - finX) let yDistance = abs(row - finY) if (xDistance > yDistance) { return moveCostDiagonal*yDistance + moveCostHorizontalOrVertical*(xDistance-yDistance) } else { return moveCostDiagonal*xDistance + moveCostHorizontalOrVertical*(yDistance-xDistance) } } func retracePath(node:PathNode) { let step = CGPoint(x: node.x, y: node.y) path.append(step) if (node.g > 0) { retracePath(node.parentNode!); } } } class PathNode { let x:Int let y:Int let g:Int let h:Int let parentNode:PathNode? init(xPos:Int, yPos:Int, gVal:Int, hVal:Int, link:PathNode?) { self.x = xPos self.y = yPos self.g = gVal self.h = hVal if (link != nil) { self.parentNode = link!; } else { self.parentNode = nil } } } As complex as it appears, the code above is more or less, a straight forward Swift implementation of A* Pathfinding. It won’t really make sense without first understanding how the A* algorithm works and there are plenty of good resources already available on the web that cover this. If you want to customise your A* code, or have a curiosity for its workings, A* Pathfinding for Beginners by Patrick Lester is a great place to start. Otherwise, the code here will be enough to get you up and running. You’ll notice some errors popping up in xCode, referring to unresolved identifier ‘Global’. So let’s tend to that now. Select File > New > File, then select iOS > Source > Swift File and click Next. Name the file Global and click Create. In Global.swift, replace the contents with this code: import Foundation struct Global { struct tilePath { static let traversable = 0 static let nonTraversable = 1 } } All we’ve done here is create some global static constants for traversable and nonTraversable tiles. We’ll be using these values in our GameScene as well as our PathFinder class, hence, we’ve made them globally accessible. You should now see the errors in your PathFinder class have disappeared. Ok, now we have our powerful PathFinder class but how do we put it to use? Well, you’ll notice that the Pathfinder init(...) method takes 4 Ints (xIni, yIni, xFin, yFin) and a lvlData array of type [[Int]]. The 4 integers are the x and y coordinates for the starting point and destination point of our path, easy enough. The lvlData array is a 2 dimensional array that maps the traversable and nonTraversable tiles. Let’s start by building a method that converts our games tiles array: [ [)] ] into our pathfinders traversable lvlData format: [ [1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1], ] where 0 = traversable and 1 = nonTraversable In GameScene.swift just under the sortDepth() method, add this function: func traversableTiles() -> [[Int]] { //1 var tTiles = [[Int]]() //2 func binarize(num:Int) ->Int { if (num == 1) { return Global.tilePath.nonTraversable } else { return Global.tilePath.traversable } } //3 for i in 0..<tiles.count { let tt = tiles[i].map{i in binarize(i.0)} tTiles.append(tt) } return tTiles } In the above code, we are doing the following: - Initiating our temporary tTiles array, to store our traversable values. - Here we built a nested function that will binarize our values. It returns 1 as 1, then any other number becomes a value of 0. Why are we doing this? Because our droid tile in the games tiles array has a value of 2, but we know that where ever the droid is placed, is a traversable area (ground), so we substitute a 0 to mark that position as traversable. Also, note that we’re using our Global static constants that we setup earlier. - We then iterate through the tiles array and use the map method and our nested binarize function, to populate our tTiles array. Next, add this function directly below the traversableTiles() function you just added: func findPathFrom(from:CGPoint, to:CGPoint) -> [CGPoint]? { let traversable = traversableTiles() //1 if (Int(to.x) > 0) && (Int(to.x) < traversable.count) && (Int(-to.y) > 0) && (Int(-to.y) < traversable.count) { //2 if (traversable[Int(-to.y)][Int(to.x)] == Global.tilePath.traversable ) { //3 let pathFinder = PathFinder(xIni: Int(from.x), yIni: Int(from.y), xFin: Int(to.x), yFin: Int(to.y), lvlData: traversable) let myPath = pathFinder.findPath() return myPath } else { return nil } } else { return nil } } This method takes a 2 CGPoints (e.g. from: the droids current location, to: user touch location). It then formats these points into separate x and y values, so it can feed them (and the traversable tiles array) to our PathFinder class. Broken down: - Check the to CGPoint (e.g. user touch) is within the boundaries of our map/level. - Check the to CGPoint (e.g. user touch) is on a traversable tile. (If the user touches a nonTraversableTile e.g. a wall, then we obviously can’t move the droid to that destination). - We then instantiate our PathFinder class with our formatted values and run the findPath method to retrieve the path. Next, add this function directly below the findPathFrom(...) function you just added: func highlightPath2D(path:[CGPoint]) { //clear previous path layer2DHighlight.removeAllChildren() for i in 0..<path.count { let highlightTile = SKSpriteNode(imageNamed: textureImage(Tile.Ground, Direction.N, Action.Idle)) highlightTile.position = pointTileIndexToPoint2D(path[i]) highlightTile.anchorPoint = CGPoint(x: 0, y: 0) highlightTile.color = SKColor(red: 1.0, green: 0, blue: 0, alpha: 0.25+((CGFloat(i)/CGFloat(path.count))*0.25)) highlightTile.colorBlendFactor = 1.0 layer2DHighlight.addChild(highlightTile) } } Don’t worry too much about what we’re doing in this function. Its purpose is to highlight our path in our view2D, it will aid our understanding in this tutorial but it’s not necessarily something you’d include in your final game. Once you’ve added the highlightPath2D(...) code, you should see a few errors pop up in xCode, that’s because we haven’t created the layer2DHighlight instance yet. Let’s quickly do that now. At the top of your GameScene class, under the line: let view2D:SKSpriteNode add this line: let layer2DHighlight:SKNode then in the init(..) method, under this line: view2D = SKSpriteNode() add this line: layer2DHighlight = SKNode() then in the didMoveToView(..) method, under this line: addChild(view2D) add this code: layer2DHighlight.zPosition = 999 view2D.addChild(layer2DHighlight) lastly, navigate down your class, and add this code directly after your PointIsoTo2D(...) method: func point2DToPointTileIndex(point:CGPoint) -> CGPoint { return floor(point / CGPoint(x: tileSize.width, y: tileSize.height)) } func pointTileIndexToPoint2D(point:CGPoint) -> CGPoint { return point * CGPoint(x: tileSize.width, y: tileSize.height) } These functions convert coordinates from pixels to tile position (index) e.g. given our map and tileSize, these coordinates (x,y) are in pixels: (0, 0), (32, 0), (64, 0), (96, 0), (128, 0), (160, 0) (0, 32), (32, 32), (64, 32), (96, 32), (128, 32), (160, 32) (0, 64), (32, 64), (64, 64), (96, 64), (128, 64), (160, 64) (0, 96), (32, 96), (64, 96), (96, 96), (128, 96), (160, 96) (0,128), (32,128), (64,128), (96,128), (128,128), (160,128) (0,160), (32,160), (64,160), (96,160), (128,160), (160,160) where these coordinates (x,y) are in tile index: (0,0), (1,0), (2,0), (3,0), (4,0), (5,0) (0,1), (1,1), (2,1), (3,1), (4,1), (5,1) (0,2), (1,2), (2,2), (3,2), (4,2), (5,2) (0,3), (1,3), (2,3), (3,3), (4,3), (5,3) (0,4), (1,4), (2,4), (3,4), (4,4), (5,4) (0,5), (1,5), (2,5), (3,5), (4,5), (5,5) Right, with all our conversion methods setup, we can now use our PathFinder. In our touchesEnded(...) method, we’re going to remove our basic positioning code and replace it with our new positioning code that utilises our PathFinder. Update your touchesEnded(...) method to look like this: override func touchesEnded(touches: Set<NSObject>, withEvent event: UIEvent) { ////////////////////////////////////////////////////////// // Original code that we still need ////////////////////////////////////////////////////////// let touch = touches.first as! UITouch let touchLocation = touch.locationInNode(viewIso) var touchPos2D = pointIsoTo2D(touchLocation) touchPos2D = touchPos2D + CGPoint(x:tileSize.width/2, y:-tileSize.height/2) ////////////////////////////////////////////////////////// // PathFinding code that replaces our old positioning code ////////////////////////////////////////////////////////// //1 let path = findPathFrom(point2DToPointTileIndex(hero.tileSprite2D.position), to: point2DToPointTileIndex(touchPos2D)) if (path != nil) { //2 var newHeroPos2D = CGPoint() var prevHeroPos2D = hero.tileSprite2D.position var actions = [SKAction]() //3 for i in 1..<path!.count { //4 newHeroPos2D = pointTileIndexToPoint2D(path![i]) let deltaY = newHeroPos2D.y - prevHeroPos2D.y let deltaX = newHeroPos2D.x - prevHeroPos2D.x let degrees = atan2(deltaX, deltaY) * (180.0 / CGFloat(M_PI)) actions.append(SKAction.runBlock({ self.hero.facing = self.degreesToDirection(degrees) self.hero.update() })) //5 let velocity:Double = Double(tileSize.width)*2 var time = 0.0 if i == 1 { //6 time = NSTimeInterval(distance(newHeroPos2D, hero.tileSprite2D.position)/CGFloat(velocity)) } else { //7 let baseDuration = Double(tileSize.width)/velocity var multiplier = 1.0 let direction = degreesToDirection(degrees) if direction == Direction.NE || direction == Direction.NW || direction == Direction.SW || direction == Direction.SE { //8 multiplier = 1.4 } //9 time = multiplier*baseDuration } //10 actions.append(SKAction.moveTo(newHeroPos2D, duration: time)) //11 prevHeroPos2D = newHeroPos2D } //12 hero.tileSprite2D.removeAllActions() hero.tileSprite2D.runAction(SKAction.sequence(actions)) //13 highlightPath2D(path!) } } OK, so there’s quite a bit going on here, let’s break it down: - Get our path using our PathFinder class via our findPathFrom(…) method. - Declare our variables that will be used through our iteration of our path array. newHeroPos2D will store our destination for each iteration, prevHeroPos2D will store our current position for each iteration. The actions array, will be a a list of actions we build, that we will ultimately run as a sequence on our hero.tileSprite2D. - Execute our iteration, we begin with the 1 index (not 0), as the 0 index of the path array coordinates will be close to, if not, exactly match our current position. - Get the angle the same way we did in our original positioning code. We then append a runBlock action to our actions array that will update our hero to face the given direction. - Establish our desired velocity and initiate the time var. - If it’s the first iteration (i == 1), then we are moving our hero from an unknown random position, to a set tile position from the path array. This means the distance could be of any random quantity, so we calculate it accurately to get our value for time. - Once we’ve calculated the first iteration, all following distances will be from tile to tile. So it’s either a straight move (vertical or horizontal), in which case our baseDuration is unaffected (multiplier = 1.0)… - …or it’s a diagonal move, so we multiply the baseDuration by 1.4 to keep velocity consistent over the slightly greater distance. Note: the 10:14 ratio (or 1.0:1.4) is used as a low cost approximation of a squares hypotenuse. It avoids having to process Pythagoras theorem (a*a + b*b = c*c) which is more accurate but more CPU intensive. - We then adjust the baseDuration by our multiplier, to get our time var for the iteration. - Append the moveTo action to our actions array. - We’re now done with our postions for this iteration, so our newHeroPos2D becomes our prevHeroPos2D, ready for use in the next iteration. - Once we’ve iterated through all the nodes in our path array, we run our collective actions as a sequence on our hero sprite. - Finally, we highlight the path in our 2D view, so we can clearly see the path that our PathFinder chose. Run your App. Now, when we direct our heroic droid through the interior wall, he refuses to do it. Instead, he finds his way around it. You can see the pathFinder‘s returned path highlighted in the 2D view. Let’s alter the level design to further test our new functionality. In GameScene.swift change your tiles array so it reads like this: tiles = [[(1,7), (1,0), (1,0), (1,0), (1,0), (1,0), (1,0), (1,0), (1,1)]] tiles.append([(1,6), (0,0), (0,0), (0,0), (0,0), (0,0), (0,0), (0,0), (1,2)]) tiles.append([(1,6), (0,0), (2,2), (0,0), (0,0), (0,0), (0,0), (0,0), (1,2)]) tiles.append([(1,6), (0,0), (0,0), (0,0), (0,0), (1,5), (1,4), (1,4), (1,5)]) tiles.append([(1,6), (0,0), (0,0), (1,7), (0,0), (0,0), (0,0), (0,0), (0,0)]) tiles.append([(1,6), (0,0), (0,0), (1,6), (0,0), (0,0), (0,0), (0,0), (0,0)]) tiles.append([(1,6), (0,0), (0,0), (1,5), (1,4), (1,4), (1,1), (0,0), (0,0)]) tiles.append([(1,6), (0,0), (0,0), (0,0), (0,0), (0,0), (1,2), (0,0), (0,0)]) tiles.append([(1,6), (0,0), (0,0), (0,0), (0,0), (0,0), (1,3), (0,0), (0,0)]) tiles.append([(1,5), (1,4), (1,4), (1,3), (0,0), (0,0), (0,0), (0,0), (0,0)]) While we’re at it, lets move the views around a bit to prevent overlapping. In the didMoveToView(...), remove these 2 lines: view2D.xScale = deviceScale view2D.yScale = deviceScale and in its place, put this code: let view2DScale = CGFloat(0.4) view2D.xScale = deviceScale * view2DScale view2D.yScale = deviceScale * view2DScale Then change the position of the 2D view to read like this: view2D.position = CGPoint(x:-self.size.width*0.48, y:self.size.height*0.43) and the isometric view positioning to read like this: viewIso.position = CGPoint(x:self.size.width*0, y:self.size.height*0.25) Run your app. Great. We can now see our droid negotiate complex paths all by himself. So proud. Feel free to play around with the level design more, to test the pathfinding to your own satisfaction. Conclusion Congratulations! You’ve just completed part 3 of the ongoing isometric tutorial series. Here is a sample project with all of the code to this point: I’ve got a few things in the works at the moment, so I’m uncertain as to when, or if, I’ll have time to produce part 4. I’ll post any progress or expected delivery dates on Twitter and Facebook, as details come to light. Please follow us there and/or sign up to our newsletter to stay informed. Cheers! 20 thoughts on “Create Your Own Isometric Tile-Based Game: Part 3” Thanks for tutorial part 3. :) Wow! Thank you so much for this tutorial. It is a great intermediate level tutorial for anyone interested in creating an isometric type game. Thanks to the clear instructions, I was able to handle the bugs/changes to 6.3. I highly recommend your tutorial and hope you do more. Could you please expand on how you adapted this to Xcode 6.3, I’ve just updated and now having issues compiling with the protocol changes The tutorial and downloadable project code has now been updated to play nice with xCode 6.3.2. Hoping that Additional parts are forthcoming !!! Great tutorial, I’d love to see this series continued! Hi Dave, I am creator of iTunes App store game called Little Avenger. I’m still learning many things about game development and working on new ideas and projects. Just wanted to say this was a great tutorial and hopefully you find the time to keep adding, VERY helpful! Cheers Hey Dave, first of all, you made a great tutorial series so far! Keep up the good work! i am wondering what software you were using to create these isometric tiles. Are you simply using Adobe Illustrator or maybe something else, that suits this particular job better? Hi Christian, Illustrator would work fine for this approach, as the in-game assets are all 2D bitmaps. Though, you can make your life easier by using 3D software to build the models, then render out to 2D frames. Have a look at 3D Studio Max and Maya. Also Blender is an open source alternative. There’s a bunch of others but that’s somewhere to start if you’re interested. Cheers. Hey Dave, great tutorial, easy to follow and understand! Please keep up this fantastic series. It helped me a lot with my own projects. Thank you a lot!!! Michael Super fun tutorial! Thank you for the time, detail and fun you put into this. This is a very satisfying tutorial! The concepts are presented clearly enough that everything makes sense, even though I don’t know Swift, yet. And you get a nice visual experience as you learn. Thanks for putting this together!!! Is there a way to move the character based on specific coordinates instead of touches. @Akshea – yep, that shouldn’t be difficult. Just use the same code as in the touchesEnded function but when setting your path, use your specific point as the parameter instead of the converted touch coordinate e.g. Great tutorials! Helped me alot. Great job. Hello, I’d like to know if there is any way for building swift apps like this for other operating system and platforms such as Android. Hi Alex, Swift is Apple’s programming language. You could build the equivalent app in Android, you’d just need to translate the code into their programming language/syntax. Hey Dave, what a beautiful example. This is getting me excited as I had thought most of ketchapp’s 3d games were just 2d rendered sprites of the 3d elements. Nice. Could you perhaps tell us when part 4 is coming out? Thank you. Pavan Hi Dave, I noticed an issue with your traversable check, the if statement: Should actually be: Otherwise you will get an index out of bounds error, because you are not checking the max number of Y co-ordinates. Hello, This is Awesome! Do you have an updated version for Swift 3?
http://bigspritegames.com/create-your-own-isometric-tile-based-game-part-3/
CC-MAIN-2018-13
refinedweb
3,845
66.64
Language Interoperability in .Net In this article I have explained What is Language Interoperability and How it works in .Net. With the help of Language Interoperability you can interact with all Other Languages of .Net.With Interoperability you can interact with all the Languages in .Net .We can Save the Time instead of Generating the same Code Again and Again What is .Net ? 1).Net is a Platform Independent Language which is used for developing various applications like Windows Applications,Web Applications ,Mobile Applications . 2).Net is a collection of more than 20 languages.Among them mostly used languages are C#.Net,VB.Net. Though,.Net is collection of several languages Every language has its own compiler. VB.Net uses VB Compilier and C# uses C# compiler for Compilation.Every .Net Language after Compilation generates CIL Code. What is CIL ? CIL is the Common Intermediate Language which is generated after the Compilation of the Code. After Compiling Vb.net code with its VB Compiler we get CIL Code. After compiling C# code with its C# Compiler we get CIL Code. CIL is the Intermediate Code which we get After the Compilation of the Source Code What is Machine Code ? Each Machine whether it is Windows or Unix consists of its own Software and Hardware . It consists of its own MicroProcessor. Every Machine will have some Instructions which can be understood by its own MicroProcessor .Those Instructions are nothing but Machine code(Machine Instructions) What is CLR ? Machine cannot Understand CIL Code which we get after the Compilation of a Program CLR is used for converting the CIL Code into Corresponding Machine Code (Machine Instructions) which can be understand by its own MicroProcessor What is Language Interoperability ? Code which is written in any .NET Language can be consumed in other .NET Language.This is called as Language Interoperability . Every .Net Language after Compiling generates CIL Code which can be used by any other .Net Language i.e within .Net Each Langauage can perform Operations with other language with the help of CIL. This feature of.Net is nothing but Interoperability. For Example , the CIL code which we get after compiling C# Language can be used by VB Language and the CIL Code which we get after compilation of VB code can be used by C# How is this Possible ? Though .Net is a collection of languages ,Every language has its own Data Types.C# has its own Data Types and VB.Net has its own Data Types but their sizes are same .After compilation any .Net language CIL Code will be generated which can be used by other Languages of .Net Examples: After Compiling VB Code, Data Types in that Language will be Converted into CIL Type and it can be used C# and Vice-Versa VB Datatypes converts to CIL which can be used by C# VB --------> CIL --------> C# Integer --> Int32 --------> int Single --> Single --------> float Boolean --> Boolean --------> bool C# Datatypes converts to CIL which can be used by VB C# ----> CIL --------> VB Int --> Int32 --------> integer Float --> Single --------> single Bool --> Boolean --------> boolean Language Interoperability is possible because even though DataTypes names are Different in Different .Net Languages their Sizes are same for all .Net Languages but after Compiling they will generate same CIL Code which can be used by any other .Net Languages Every .Net Language after Compiling Generates CIL Code which can be used in other .Net Langauges ----Proof---- 1)Take a New Project 2)Select Language as Visual Basic 3)Select Class Library Template 4)Save it as VB CODE Why Class Library? The code which is written in Class Library can’t be used for execution but it can used be consumed in other classes.We must build the Class Library so that a dll file will be Generated. Write the following code Public Class Class1 Public Sub hitocsharp() Console.WriteLine("Hi C# ") End Sub Public Function ADD(ByVal x As Integer, ByVal y As Integer) As Integer Return x + y End Function End Class These are important Points Since we cannot execute the code in Class Library, convert it into an Assembly 1)Open Solution Explorer 2)Right Click on Project i.e VBCODE 3)Select Build How to use above VBCODE 1)Right Click on VBCODE 2)Select New Project 3)select Visual C# 3)Name it as CSHARPCODE Right Click on Csharpcode and add a Class csharp Now Add the Reference of VBCODE.dll which we have generated before How to Add Reference files 1)Right Click on the Csharpcode 2)Select Add Reference 3)Browse VBCODE.dll in the VBCODE Project 4)Add VBCODE.dll to your Project To access the VBCode we must add one statement i.e using VBCODE; because Now it is another Project Write the Following Code using System; using System.Collections.Generic; using System.Linq; using System.Text; using VBCODE; namespace CHARPCODE { class Csharp { static void Main() { Class1 obj = new Class1(); obj.hitocsharp(); Console.WriteLine( obj.ADD(1,3)); Console.ReadLine(); } } } Now Execute the Csharp.cs Observe that you will get the Output of the Class Library Like this you can Perform Inter Operations between all the Languages in .Net. Advantage Of Interoperability With Interoperability you can interact with all the Languages in .Net We can Save the Time instead of Generating the same Code Again and Again Note that Language Interoperability can be possible only with the help of CLS(Common Language Specifications) and CLR(Common Language Runtime) of .Net Hi Reddy, Thanks for posting such a nice article about Interopability
https://www.dotnetspider.com/resources/44567-interoperability-net.aspx
CC-MAIN-2019-30
refinedweb
923
56.55
OpenCV Python - set background color I am trying to remove the grayish background from a photo and replace it with white So far I have this code: image = cv2.imread(args["image"]) r = 150.0 / image.shape[1] dim = (150, int(image.shape[0] * r)) resized = cv2.resize(image, dim, interpolation=cv2.INTER_AREA) lower_white = np.array([220, 220, 220], dtype=np.uint8) upper_white = np.array([255, 255, 255], dtype=np.uint8) mask = cv2.inRange(resized, lower_white, upper_white) # could also use threshold res = cv2.bitwise_not(resized, resized, mask) cv2.imshow('res', res) # gives black background The problem is the image now has a black background as I masked out the gray. How to replace empty pixels with white ones? source to share I really recommend you stick with OpenCV, it is well optimized. The trick is to invert the mask and apply it to some background, you will have a masked image and a masked background, then you combine both. image1 is your image masked by the original mask, image2 is the background image masked by the inverted mask, and image3 is the combined image. Important. image1, image2 and image3 must be the same size and type. The mask should be in grayscale. import cv2 import numpy as np # opencv loads the image in BGR, convert it to RGB img = cv2.cvtColor(cv2.imread('E:\\FOTOS\\opencv\\zAJLd.jpg'), cv2.COLOR_BGR2RGB) lower_white = np.array([220, 220, 220], dtype=np.uint8) upper_white = np.array([255, 255, 255], dtype=np.uint8) mask = cv2.inRange(img, lower_white, upper_white) # could also use threshold mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))) # "erase" the small white points in the resulting mask mask = cv2.bitwise_not(mask) # invert mask # load background (could be an image too) bk = np.full(img.shape, 255, dtype=np.uint8) # white bk # get masked foreground fg_masked = cv2.bitwise_and(img, img, mask=mask) # get masked background, mask must be inverted mask = cv2.bitwise_not(mask) bk_masked = cv2.bitwise_and(bk, bk, mask=mask) # combine masked foreground and masked background final = cv2.bitwise_or(fg_masked, bk_masked) mask = cv2.bitwise_not(mask) # revert mask to original source to share First you need to get the background. To this you need to subtract from the original image with the mask image. And then change the black background to white (or any color). And then add the mask image again. Look here OpenCV grabcut () background color and outline in Python source to share Instead of using bitwise_not, I would use resized.setTo([255, 255, 255], mask) Before doing this, I also blur and expand the mask to get rid of the specs in the mask that are part of the image you want to keep. source to share
https://daily-blog.netlify.app/questions/2217002/index.html
CC-MAIN-2021-49
refinedweb
450
70.6
The Plugins module offers items to integrate platform-specific functionality like social networks, analytics or monetization. Plugins are usually imported with the pattern import Felgo 3.0 in your qml source code, e.g.: import Felgo 3.0 If you want to use a plugin on the Build Server you have to indicate which plugins you want to link against. This can be achieved by adding a new key/value pair to your config.json file in your game's qml root directory, e.g.: "plugins": [ "gamecenter", "flurry" ] Please have a look at the individual plugin documentation for the required plugin identifier for your import statement and Build Server value. Voted #1 for:
https://felgo.com/doc/plugins-qmlmodule/
CC-MAIN-2019-47
refinedweb
113
59.6
Hit keys _ Mac - AL Hasan Haj Asaad I would really thank you for this nice simple program in fact I left Mac cause you don’t release any version in Mac’s platform will you release it _ ? second : when keys are pressed over keyboard _ is there any preference which make me listening a sound like I listen to the key hits _ ? - Claudia Frank Hello @AL-Hasan-Haj-Asaad, I’m not quite sure if I understand your question correctly. Do you want a sound to be played while pressing a key? If so, npp doesn’t support this natively. I assume it can be solved but as said, I’m not sure whether I understood your question correctly. Cheers Claudia - AL Hasan Haj Asaad @Claudia-Frank said: assume Yes It is so Sad … Mac platform still a big problem I don’t know why does not a big program such NPP++ support this platform _? - Claudia Frank Hello @AL-Hasan-Haj-Asaad Mac platform still a big problem there are different possibilities to run notepad++ on a mac (not natively but it is possible) A) run npp within a vm -> parallels support windows B) use software like winebottles to run npp on mac I don’t know why does not a big program such NPP++ support this platform _? npp uses windows api heavily and os’s like osx and linux do have different api, so in order to support those os’s you need to either provide three different versions or introduce an additional layer which abstracts each os api with the benefit to slow down npp execution. In regards to your question, is a bit of a hack and works only if char is added. What needs to be done first is described here The script itself import winsound soundfile = "C:\Windows\Media\chimes.wav" def callback_CHARADDED(args): winsound.PlaySound(soundfile, winsound.SND_FILENAME) editor.clearCallbacks([SCINTILLANOTIFICATION.CHARADDED]) editor.callback(callback_CHARADDED, [SCINTILLANOTIFICATION.CHARADDED]) Of course you need to change the path to the soundfile. Cheers Claudia
https://notepad-plus-plus.org/community/topic/11376/hit-keys-_-mac
CC-MAIN-2017-34
refinedweb
341
55.68
User forums > General (but related to Code::Blocks) Problem regenarating numbers the second time around.. (1/1) sider the spider: Hi i created a program regenerating Fibonacci sequence.. The code works well except one detail When the program tries to regenerate a sequence the second time around it does not show the numbers at all.. I do not think I have an issue with my code I suspect it is a code blocks false configuration.. here is my main: #include <iostream> #include <conio.h> #include <stdlib.h> #include "Fibonacci_Library.h" using namespace std; int main() { // Variables unsigned int upToNumber; unsigned int firstNumber = 0, nextNumber = 0; unsigned int secondNumber = 1; char choice = 'y'; while((choice == 'y') || (choice == 'Y')) { // ****************** TITLE ******************* cout << "\n" << "\t\t\t\t\t The Fibonacci Sequence " << "\n" << "\t\t\t\t\t ************************** " << "\n"; cout << "\n" << "\t\t\t This program creates the Fibonacci Sequence up to a number" << "\n" << "\t\t\t\t that is specified by you (The user)." << "\n"; cout << "\n\n" << "Up to what number you want to create the Fibonacci Sequence? "; upToNumber = getNumber(); // *********** Fibonacci Algorithm ************ while (nextNumber <= upToNumber) { cout << nextNumber << endl; nextNumber = firstNumber + secondNumber; secondNumber = firstNumber ; firstNumber = nextNumber; } cout << "Do you want to create the Fibonacci Sequence up to a different number " << "This time? "; choice = getLetter(); while ( (choice != 'y') && (choice != 'Y') && (choice != 'n') && (choice != 'N') ) { cout << "\n" << "Please type \"Yes\" or \"No\" "; choice = getLetter(); } system("cls"); } cout << "\n" << "Press any key to terminate program... "; getch(); return 0; } Alpha: This type of issue cannot be caused by C::B; the issue is within your code. I recommend you step through it with the debugger. The debugger is your friend. If you are unable to find the cause, try asking on a programming forum (this is not the place for generic programming questions). sider the spider: I thank you very much sir, "Alpha". My code had the problem after all. I managed and fixed it. I realized after you said it was my code problem. I am sorry for putting such a question at the wrong place, I am relatively new to c++ and I code out of hobby and just some math curiosities that I have with numbers. Navigation [0] Message IndexGo to full version
http://forums.codeblocks.org/index.php?topic=21759.0;wap2
CC-MAIN-2017-22
refinedweb
370
63.9
import "k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumerestrictions" const ( // ErrReasonDiskConflict is used for NoDiskConflict predicate error. ErrReasonDiskConflict = "node(s) had no available disk" ) Name is the name of the plugin used in the plugin registry and configurations. New initializes a new plugin and returns it. VolumeRestrictions is a plugin that checks volume restrictions. func (pl *VolumeRestrictions) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status Filter invoked at the filter extension point. It evaluates if a pod can fit due to the volumes it requests, and those that are already mounted. If there is already a volume mounted on that node, another pod that uses the same volume can't be scheduled there. This is GCE, Amazon EBS, ISCSI and Ceph RBD specific for now: - GCE PD allows multiple mounts as long as they're all read-only - AWS EBS forbids any two pods mounting the same volume ID - Ceph RBD forbids if any two pods share at least same monitor, and match pool and image, and the image is read-only - ISCSI forbids if any two pods share at least same IQN and ISCSI volume is read-only func (pl *VolumeRestrictions) Name() string Name returns name of the plugin. It is used in logs, etc. Package volumerestrictions imports 4 packages (graph) and is imported by 23 packages. Updated 2020-09-18. Refresh now. Tools for package owners.
https://godoc.org/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumerestrictions
CC-MAIN-2020-40
refinedweb
235
56.96
So try and run this code for me please: def trashy(message, move): message2 = "" for i in message: x = ord(i) message2= message2+chr(x+move) print(message2) numbermove = input("How much do you want to shift your letters by?") something = input("What is your message?") trashy(something, int(numbermove)) Does it work for you? It doesn't for me... All it does is asks me 2 questions and then promptly screws off. So why? I ran it from the run dialog... and pasting it in the console didn't even get it do ask me for input. I have also tried to do the return statement, which looked like this: def trashy(message, move): message2 = "" for i in message: x = ord(i) message2= message2+chr(x+move) return message2 numbermove = input("How much do you want to shift your letters by?") something = input("What is your message?") something2 = trashy(something, int(numbermove)) print(something2) It doesn't work for me, it just sits there when I paste it into the prompt, and running it from the command line only asks for input and promptly exits. I have checked my nvda speech viewer, I have checked my nvda history, but it's like the program crashes before it reaches the print statement. Thing is, the python prompt doesn't complain when I paste in the code. It just sits there, silently, not even asking me for input. So what did I break? How can I fix it? More importantly, how can I keep that from happening again? I can't code and not see the results of my program. What is hard is making code that accepts different and sometimes unexpected types of input and still works. This is what truly takes a large amount of effort on a developer's part.
https://forum.audiogames.net/post/404678/
CC-MAIN-2019-43
refinedweb
302
73.68
Dropping a graph or a portion of a graph drop() hangs in the Gremlin console. Dropping a graph or a portion of a graph such as some vertices can hang based on errors in logged/unlogged or causes error depending on logged/unlogged DSE database batches. A method to resolve the problem is to DROP TABLE or TRUNCATE the underlying DSE database tables storing graph data. The data for a graph is stored in <graph_name>.<vertex_label>_p and <graph_name>.<vertex_label>_e. For example, recipe data stored in a graph food will be food.recipe_p and food.recipe_e. In some cases, additional steps must be taken to delete a graph. A graph consists of three DSE database keyspaces: <graph_name>, <graph_name>_system, and <graph_name>_pvt. All three keyspaces must be deleted, with cqlsh if necessary, to completely delete the graph. cqlshcommands: cqlsh> delete from dse_system.shared_data where dataspace = 'Cluster' and valid_until = 13814000-1dd2-11b2-0000-000000000000 and namespace = 'system' and name = '<graph_name>'; cqlsh> update dse_system.shared_data set last_updated = now() where dataspace = 'Cluster'; Shared_datais not normally manually updated. However, this procedure can be used in the case of a node failure during a graph provisioning operation.
https://docs.datastax.com/en/dse-trblshoot/doc/troubleshooting/graphDropHangs.html
CC-MAIN-2020-29
refinedweb
193
58.69
#include <marsyas/sched/TmTimerManager.h> #include "TmRealTime.h" #include "TmVirtualTime.h" Go to the source code of this file. Definition at line 43 of file TmTimerManager.cpp. struct Make##_NAME : public MakeTimer { \ Make##_NAME() {}; ~Make##_NAME() {}; \ TmTimer* make(std::string ident) { return new _NAME (ident); }; \ } Adding new Timers: New timers are added by first making them... Basically, a map is created from "TimerName"=>TimerConstructorObject. This makes it possible to use a map for fast access to specific timers and it prevents having to instantiate each Timer type. The constructor object simply wraps the new operator so that it constructs objects only when requested. 1. Add the timer's include file. 2. Wrap the object using the macro: TimerCreateWrapper(TmSomeTimerName); 3. Register the timer in the map: in function TmTimerManager::addTimers() add the line registerTimer(TmSomeTimerName); Definition at line 38 of file TmTimerManager.cpp.
http://marsyas.info/doc/sourceDoc/html/TmTimerManager_8cpp.html
CC-MAIN-2018-47
refinedweb
143
51.34
Converting Unicode Strings to 8-bit Strings Fredrik Lundh | January 2006 A Unicode string holds characters from the Unicode character set. If you want an 8-bit string, you need to decide what encoding you want to use. Common encodings are US-ASCII (which is the default if you convert from Unicode to 8-bit strings in Python), ISO-8859-1 (aka Latin-1), and UTF-8 (a variable-width encoding that can represent all Unicode strings). For example, if you want Latin-1 strings, you can use one of: s = u.encode("iso-8859-1") # fail if some character cannot be converted s = u.encode("iso-8859-1", "replace") # instead of failing, replace with ? s = u.encode("iso-8859-1", "ignore") # instead of failing, leave it out If you want an ASCII string, replace “iso-8859-1” above with “ascii” or “us-ascii”. If you want to output the data to a web browser or an XML file, you can use: import cgi s = cgi.escape(u).encode("ascii", "xmlcharrefreplace") The cgi.escape function converts reserved characters (< > and &) to character entities (<, > and &), and the xmlcharrefreplace flag tells the encoder to use character references (&#nn;) for any character that cannot be encoded in the given encoding. The browser (or XML parser) at the other end will convert things back to Unicode. Note that cgi.escape doesn’t escape quotes by default. To use the value in an attribute, you need to pass in an extra flag to escape, and put the result in double quotes: s = 'attr="%s"' % cgi.escape(u,1).encode("ascii", "xmlcharrefreplace") The unaccent.py script shows how to strip off accents from latin characters: import unicodedata, sys CHAR_REPLACEMENT = { # latin-1 characters that don't have a unicode decomposition 0xc6: u"AE", # LATIN CAPITAL LETTER AE 0xd0: u"D", # LATIN CAPITAL LETTER ETH 0xd8: u"OE", # LATIN CAPITAL LETTER O WITH STROKE 0xde: u"Th", # LATIN CAPITAL LETTER THORN 0xdf: u"ss", # LATIN SMALL LETTER SHARP S 0xe6: u"ae", # LATIN SMALL LETTER AE 0xf0: u"d", # LATIN SMALL LETTER ETH 0xf8: u"oe", # LATIN SMALL LETTER O WITH STROKE 0xfe: u"th", # LATIN SMALL LETTER THORN } ## # Translation dictionary. Translation entries are added to this # dictionary as needed. class unaccented_map(dict): ## # Maps a unicode character code (the key) to a replacement code # (either a character code or a unicode string). def mapchar(self, key): ch = self.get(key) if ch is not None: return ch de = unicodedata.decomposition(unichr(key)) if de: try: ch = int(de.split(None, 1)[0], 16) except (IndexError, ValueError): ch = key else: ch = CHAR_REPLACEMENT.get(key, key) self[key] = ch return ch if sys.version >= "2.5": # use __missing__ where available __missing__ = mapchar else: # otherwise, use standard __getitem__ hook (this is slower, # since it's called for each character) __getitem__ = mapchar if __name__ == "__main__": text = u""" "Jo, når'n da ha gått ett stôck te, så kommer'n te e å, å i åa ä e ö." "Vasa", sa'n. "Å i åa ä e ö", sa ja. "Men va i all ti ä dä ni säjer, a, o?", sa'n. "D'ä e å, vett ja", skrek ja, för ja ble rasen, "å i åa ä e ö, hörer han lite, d'ä e å, å i åa ä e ö." "A, o, ö", sa'n å dämmä geck'en. Jo, den va nôe te dum den. (taken from the short story "Dumt fôlk" in Gustaf Fröding's "Räggler å paschaser på våra mål tå en bonne" (1895). """ print text.translate(unaccented_map()) # note that non-letters are passed through as is; you can use # encode("ascii", "ignore") to get rid of them. alternatively, # you can tweak the translation dictionary to return None for # characters >= "\x80". map = unaccented_map() print repr(u"12\xbd inch".translate(map)) print repr(u"12\xbd inch".translate(map).encode("ascii", "ignore")) Comment: ch = CHAR_REPLACEMENT.get(key, key) does not seem to work. What works is ch = CHAR_REPLACEMENT.get(unichr(key), key) but then the CHAR_REPLACEMENT values need to be unicode strings. Posted by ludo (2007-03-29) Comment: Awesome article. This was super useful. Keep up the good work :) Posted by anon (2007-07-12) Comment: 1. I'm not sure if "eth" should be converted into "d" or "dh", and the "capital O with stroke" into "OE" or "Oe", but you as a Scandinavian surely know better. 2. Please don't confine the translation to Latin-1 only. I especially miss the "l with stroke", which is very frequent in Polish. Here is a fragment of my program performing the same task with additional non-decomposable characters that you may consider to add: Posted by Marcin Ciura (2006-11-23)
http://sandbox.effbot.org/zone/unicode-convert.htm
crawl-003
refinedweb
785
73.07
Jaswinder Singh Rajput wrote:> On Mon, Jan 19, 2009 at 5:23 PM, Avi Kivity <avi@redhat.com> wrote:> >> Sam Ravnborg wrote:>> >>>> They are. This bits advertise to userspace what features kvm supports,>>>> both compile- and run-time.>>>>>>>> >>> This is wrong...>>> The headers does not change with the kernel configuration and advertising>>> the>>> kvm features via a .h file like this is simply plain broken.>>>>>> >> Ok. Don't know why I thought unifdef was supplied with the full>> configuration.>>>> >>> You cannot assume that the header files are generated with the exact same>>> config>>> as used by the running kernel.>>>>>> >> This is just for arch specific defines. I'll move these to asm/kvm.h.>>>> >>> And userspace has in no way access to the CONFIG_ namespace which is>>> purely kernel-internal.>>>>>> I cannot see how you have ever seen kcm advertise that for example>>> KVM_CAP_USER_NMI>>> equals to 22 because CONFIG_X86 is never (supposed to be) defined in>>> userspace ->>> except if you did so yourself by some means.>>>>>> >> We did, we ship a hacked-up kvm.h (generated by unifdef) with our userspace.>>>> >> latest -tip is still giving 'make headers_check' warnings:> usr/include/linux/kvm.h:61: leaks CONFIG_X86 to userspace where it is not valid> usr/include/linux/kvm.h:64: leaks CONFIG_X86 to userspace where it is not valid> usr/include/linux/kvm.h:387: leaks CONFIG_X86 to userspace where it> is not valid> usr/include/linux/kvm.h:391: leaks CONFIG_X86 to userspace where it> is not valid> usr/include/linux/kvm.h:396: leaks CONFIG_X86 to userspace where it> is not valid>> So should I resend my patch or you are going to move this stuff> Your patch is broken. I'll push mine shortly.-- error compiling committee.c: too many arguments to function
https://lkml.org/lkml/2009/2/4/260
CC-MAIN-2014-10
refinedweb
296
77.03
Mono WebServices Whilst converting fellow CLUG’er Mark to Mono (on Windows no less!) last night, I decided to revisit my promise to publish my ‘Hello’ web service. Whilst chatting to him, I re-coded it as a Mono ASP.NET Web Service which can be seen here. Writing the Web Service My web service ‘page’ consists of one line: <%@ WebService Language="c#" Class="Schwuk.HelloWorld" %> Whilst my web service ‘logic’ (code behind as usual) consists of: using System; using System.Web.Services; namespace Schwuk { [WebService(Namespace="",Description="Very simple test webservice")] public class HelloWorld : System.Web.Services.WebService { [WebMethod(Description="Says Hello")] public string SayHello(string Name) { return String.Format("Hello {0}", Name); } } } This is compiled with: $ mcs -t:library -r:System.Web.Services -out:bin/HelloWorld.dll HelloWorld.cs Consuming the Web Service The easiest way to consume the web service is to use it’s WSDL file (you can find mine here) to create a proxy client. You’ve got two ways of doing this from Mono/.NET: Since the first option is pretty self explanatory, I’ll quickly demonstrate the second option. $ wsdl -n:Schwuk This generates a HelloWorld.cs file that can be included in your project. I specify a namespace with the -n: option, but that’s optional. Demonstrating the Web Service I re-implemented my original HelloWorld app as HelloWorldClient using this new web service. You can grab the MonoDevelop project here or the client itself in the following forms: Here it is running on Linux: and here running on XP (courtesy of Mark): This is the same executable – no recompilation was required.. 14 Nov 2004 5:17 am I cant get your example to work with xsp.<br /> <br /> I have copied your sourcecode and compiled it successfully using the command you specified.<br /> <br /> I then started xsp specifying the root folder where I have my asmx file and where I put the bin/HelloWorld.dll file.<br /> <br /> When I try to view the page in my browser I get the following error:<br /> System.TypeLoadException: Cannot load type ‘Schwuk.HelloWorld’<br /> <br /> Im quite new in the world of Mono and any help would be greatly appriciated. 14 Nov 2004 5:17 am hello,<br /> I am not very good with English, I describe my problem.<br /> I want to carry out a web service with C# that consents to a db mysql, but when including the class byteFx. it dates. mysqlclient comes out me an error in the compilation, something with respect to that she/he is not the place of the class, next I place the inclusion code using bytefx.data.mysqlclient.<br /> this it is my problem, also like I can consume the web service from an application windows made in C #you remember that the web service this in linux running low xsp <br /> thank you for your help
http://m.schwuk.com/articles/2004/11/14/mono-webservices/comment-page-1
crawl-002
refinedweb
484
63.7
There isn’t much insight into the execution of a map reduce script in MongoDB, but I’ve found three techniques to help. Of course the preferred technique for map reduce is to use declarative aggregation operators, but there are some problems that naturally lend themselves to copious amounts of imperative code. That’s the kind of debugging i needed to do recently. In a Mongo script you can use print and printjson to send strings and objects into standard output. During a map reduce these functions don’t produce output on stdout, unfortunately, but the output will appear in the log file if the verbosity is set high enough. Starting mongod with a –vvvv flag works for me. Log file output can be useful in some situations, but in general, digging through a log file created in high verbosity mode is difficult. The best way I’ve found to debug map reduce scripts running inside Mongo is to attach logging data directly to the output of the map and reduce functions. To debug map functions, this means you’ll emit an object that might have an array attached, like the following. { "name": "Scott", "total" : 15, "events" : [ "Step A worked", "Flag B is false", "More debugging here" ] } Inside the map function you can push debugging strings and objects into the events array. Of course the reduce function will have to preserve this debugging information, possibly by aggregating the arrays. However, if you are debugging the map function I’d suggest simplifying the process by not reducing and simply letting emitted objects pass through to the output collection. Another technique to do this is to emit using a key of new ObjectId, so each emitted object is in its own bucket. As an aside, my favorite tool for poking around in Mongo data is Robomongo (works on OSX and Windows). Robomongo is shell oriented, so you can use all the Mongo commands you already know and love. Robomongo’s one shortcoming is in trying to edit system stored JavaScript. For that task I use MongoVue (Windows only, requires a license to unlock some features). By far the best debugging experience is to move a map or reduce function, along with some data, into a browser. The browser has extremely capable debugging tools where you can step through code and inspect variables, but there are a few things you’ll need to do in preparation. 1. Define any globals that the map function needs. At a minimum, this would be an emit function, which might be as simple as the following. var result = null; window.emit = function(id, value) { result = value; }; 2. Have a plan to manage ObjectId types on the client. With the C# driver I use the following ActionResult derived class to get raw documents to the browser with custom JSON settings to transform ObjectId fields into legal JSON. public class BsonResult : ActionResult { private readonly BsonDocument _document; public BsonResult(BsonDocument document) { _document = document; } public override void ExecuteResult(ControllerContext context) { var settings = new JsonWriterSettings(); settings.OutputMode = JsonOutputMode.Strict; var response = context.HttpContext.Response; response.ContentType = "application/json"; response.Write(_document.ToJson(settings)); } } Note that using JsonOutputMode.Strict will give you a string that a browser can parse using JSON.parse, but it will change fields of type ObjectId into full fledged objects ({ “$oid”: “5fac…ffff”}). This behavior will create a problem if the map script ever tries to compare ObjectId fields by value (object1.id === object2.id will always be false). If the ObjectId creates a problem, the best plan, I think, is to walk through the document using reflection in the browser and change the fields into simple strings with the value of the ID. Hope that helps!
https://odetocode.com/blogs/scott/archive/2015/02/11/debugging-map-reduce-in-mongodb.aspx
CC-MAIN-2021-21
refinedweb
618
54.32
This post describes a problem I ran into when converting a test project from .NET Framework to .NET Core. The test project was a console app, so that specific tests could easily be run from the command line, as well as using the normal xUnit console runner. Unfortunately, after converting the project to .NET Core, the project would no longer compile, giving the error: CS0017 Program has more than one entry point defined. Compile with /main to specify the type that contains the entry point. This post digs into the root cause of the error, why it manifests, and how to fix it. The issue and solution are described in this GitHub issue. tl;dr; Add <GenerateProgramFile>false</GenerateProgramFile>inside a <PropertyGroup>element in your test project's .csproj file. The problematic test project The configuration I ran into probably isn't that common, but I have seen it used in a few places. Essentially, you have a console application that contains xUnit (or some other testing framework like MSTest) tests. You can then easily run certain tests from the command line, without having to use the specific xUnit or MSTest test runner/harness. For example, you might have some key integration tests that you want to be able to run on some machine that doesn't have the required unit testing runners installed. By using the console-app approach, you can simply call the test methods in the program's static void main method. Alternatively, you may want to include xUnit tests as part of your real app. Consider the following example project. It consists of a single "integration" test in the CriticialTests class, and a Program.cs that runs the test on startup. The test might look something like: public class CriticalTests { [Fact] public void MyIntegrationTest() { // Do something Console.WriteLine("Testing complete"); } } The Program.cs file simply creates an instance of this class, and invokes the MyIntegrationTest method directly: public class Program { public static void Main(string[] args) { new CriticalTests().MyIntegrationTest(); } } Unfortunately, this project won't compile. Instead, you'll get this error: CS0017 Program has more than one entry point defined. Compile with /main to specify the type that contains the entry point. Why is there a problem? On the face of it, this doesn't make sense. The dotnet test documentation states: Unit tests are console application projects… so if they're console applications, surely they should have a static void main right? Why are test project's console projects? The detail of why a test project is a console application is a little subtle; it's not immediately obvious from looking at the .csproj project file. For example, consider the following project file. This is for a .NET Core class library project, created using the "SDK style" .csproj file. There's not much to it, just the Sdk attribute and the TargetFramework (which is .NET Core 2.0): <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netcoreapp2.0</TargetFramework> </PropertyGroup> </Project> Now lets look at a .NET Core console project's .csproj. <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netcoreapp2.0</TargetFramework> <OutputType>Exe</OutputType> </PropertyGroup> </Project> This is almost identical, the only difference is the <OutputType> element, which tells MSBuild we're producing a console app instead of a library project. Finally, lets look at a .NET Core (xUnit) test project's .3.1" /> <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" /> </ItemGroup> </Project> This project has a bit more to it, but the notable features are - The Sdkis the same as the library and console projects - It has an <IsPackable>project, so calling dotnet packon the solution won't try and create a NuGet for this project - It has three NuGet packages for the .NET Test SDK, xUnit, and the xUnit adapater for dotnet test - It doesn't have an <OutputType>of Exe The interesting point is that last bullet. I (and the documentation) stated that a test project is a console app, so why doesn't it have an <OutputType>? The secret, is that the Microsoft.NET.Test.Sdk NuGet package is injecting the <OutputType> element when you build your project. It does this by including a .targets file in the package, which runs automatically when your project builds. If you want to see for yourself, open the Microsoft.Net.Test.Sdk.targets file from the NuGet package (e.g. at %USERPROFILE%\.nuget\packages\microsoft.net.test.sdk\15.3.0\build\netcoreapp1.0) Alternatively, you can view the file on NuGet. The important part is: <PropertyGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp'"> <OutputType>Exe</OutputType> </PropertyGroup> So if the project is a .NET Core project, it adds the <OutputType>. That explains why the project is a console app, but it doesn't explain why we're getting a build error… It also doesn't explain why the test SDK needs to do this in the first place, but I don't have the answer to that one. This comment by Brad Wilson suggests it's not actually required, and is there for legacy reasons more than anything else. Why is there a build error? The build error is actually a consequence of forcing the project to a console app. If you take a library project and simply add the <OutputType>Exe</OutputType> element to it, you'll get the following error instead: CS5001 Program does not contain a static 'Main' method suitable for an entry point A console app needs an "entry point" i.e. a method to run when the app is starting. So to convert a library project to a console app you must also add a Program class with a static void Main method. Can you see the problem with that, given the Microsoft.Net.Test.Sdk <OutputType> behaviour? If adding the Microsoft.Net.Test.Sdk package to a library project silently converted it to a console app, then you'd get a build error by default. You'd be forced to add a static void Main, even if you only ever wanted to run the app using dotnet test or Visual Studio's Test Explorer. To get round this, the SDK package automatically generates a Program file for you if you're running on .NET Core. This ensures the build doesn't break when you add the package to a class library. You can see the MSBuild target that does this in the .targets file for the package. It creates a Program file using the correct language (VB or C#) and compiles it into your test project. Which leads us back to the original error. If you already have a Program file in your project, the compiler doesn't know which file to choose as the entry point for your app, hence the message: CS0017 Program has more than one entry point defined. Compile with /main to specify the type that contains the entry point. So now we know exactly what's happening, we can fix it. The solution The error message gives you a hint as to how to fix it, Compile with /main, but the compiler is assuming you actually want both Program classes. In reality, we don't need the auto generated one at all, as we have our own. Luckily, the Microsoft.Net.Test.Sdk.targets file uses a property to determine whether it should generate the file: <GenerateProgramFile Condition="'$(GenerateProgramFile)' == ''">true</GenerateProgramFile> This defines a property called $(GenerateProgramFile), and sets its value to true as long as it doesn't already have a value. We can use that condition to override the property's value to false in our test csproj file, by adding <GenerateProgramFile>false</GenerateProgramFile> to a PropertyGroup. For example: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netcoreapp2.0</TargetFramework> <IsPackable>false</IsPackable> <GenerateProgramFile>false</GenerateProgramFile> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0" /> <PackageReference Include="xunit" Version="2.3.1" /> <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" /> </ItemGroup> </Project> With the property added, we can build and run our application both using dotnet test, or by simply running the console app directly. Summary The Microsoft.Net.Test.Sdk NuGet package required for testing with the dotnet test framework includes an MSBuild .targets file that adds an <OutputType>Exe</OutputType> property to your test project, and automatically generates a Program file. If your test project is already a console application, or includes a Program class with a static void main method, then you must disable the auto-generation of the program file. Add the following element to your test project's .csproj, inside a <PropertyGroup> element: <GenerateProgramFile>false</GenerateProgramFile>
https://andrewlock.net/fixing-the-error-program-has-more-than-one-entry-point-defined-for-console-apps-containing-xunit-tests/
CC-MAIN-2020-50
refinedweb
1,432
57.77
Content-type: text/html #include <xti.h> int t_error(const char *errmsg);_error() function produces a message on the standard error output which describes the last error encountered during a call to a transport function. The argument string errmsg is a user-supplied error message that gives context to the error. The error message is written as follows: first (if errmsg is not a null pointer and the character pointed to be errmsg is not the null character) the string pointed to by errmsg followed by a colon and a space; then a standard error message string for the current error defined in t_errno. If t_errno has a value different from TSYSERR, the standard error message string is followed by a newline character. If, however, t_errno is equal to TSYSERR, the t_errno string is followed by the standard error message string for the current error defined in errno followed by a newline. The language for error message strings written by t_error() is that of the current locale. If it is English, the error message string describing the value in t_errno may be derived from the comments following the t_errno codes defined in xti.h. The contents of the error message strings describing the value in errno are the same as those returned by the strerror(3C) function with an argument of errno. The error number, t_errno, is only set when an error occurs and it is not cleared on successful calls. If a t_connect(3NSL) function fails on transport endpoint fd2 because a bad address was given, the following call might follow the failure: t_error("t_connect failed on fd2"); The diagnostic message to be printed would look like: t_connect failed on fd2: incorrect addr format where incorrect addr format identifies the specific error that occurred, and t_connect failed on fd2 tells the user which function failed on which transport endpoint. Upon completion, a value of 0 is returned. All - apart from T_UNINIT No errors are defined for the t_error() that can be set by the XTI interface and cannot be set by the TLI interface is: TPROTO See attributes(5) for descriptions of the following attributes: t_errno(3NSL)strerror(3C), attributes(5)
http://backdrift.org/man/SunOS-5.10/man3nsl/t_error.3nsl.html
CC-MAIN-2017-09
refinedweb
362
52.83