I don't understand the use-case for a programming environment where the program looks like a text file, but I need to click on a while loop and then click on the "condition" field in the details column to edit the condition code with my keyboard, rather than just select the condition code in the main view and type there.
I guess the program in the main view is always syntactically correct? But I can still have a syntax error in my condition, so on what dimensions is this a net improvement vs just... typing the code?
I am a big fan of flow-based visual programming for signal processing domains (like GNU Radio), music and generative art (like Max/MSP). In those domains, where there's a lot of data flowing through confusing but pure transformations, having a GUI and being able to effortlessly inspect intermediary products and creatively try different pipelines of patches can be much more convenient than text programming.
But if you're just putting a lot of extra steps around a typical looking procedural scripting language, what's the win?
MahmoudFayed 1 day ago [-]
>> "I don't understand the use-case for an programming environment where the program looks like a text file, but I need to click on a while loop and then click on the "condition" field in the details column to edit the condition code with my keyboard, rather than just select the condition code in the main view and type there"
In PWCT2, just click the (Start Point) and press CTRL+L and all of the interaction pages will be opened to quickly modify the input for any component.
PWCT2 is designed to be used in different ways based on the context.
1- If you like writing code, just write code, and PWCT2 will convert it to a visual representation.
2- If you want to explore the environment and learn about the visual components, you can use the mouse to discover and use any component.
3- If you know the component name, just use keyboard shortcuts to create programs quickly.
4- Using the Time Machine and playing programs as a movie, you can read large programs without touching your mouse or keyboard. Just watch while drinking your coffee.
5- The advantages of interaction pages (data-entry forms) become apparent when using large components, such as those representing GUI classes.
6- Using the steps tree, we have drag-and-drop functionality, allowing us to quickly organize the logic of our programs.
smokel 3 days ago [-]
This particular implementation doesn't strike me as extremely useful, but stepping away from text-based files opens up quite a lot of possibilities.
For one, identifiers could be replaced by UUIDs, making some refactoring operations trivial. And as someone else points out, the system could reject syntactical errors at an early stage. Smalltalk implementations such as Squeak [1] show a lot of potential. Unfortunately, the programming ecosystem at large seems to be quite conservative.
[1] https://squeak.org/
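A minimal sketch of that UUID-identifier idea, assuming a toy AST held as nested tuples (Python, purely illustrative; not how Squeak or any real structured editor stores programs). The point is that a rename becomes a single symbol-table update instead of a textual search-and-replace:

    import uuid

    # Symbol table: uuid -> human-readable name. The tree below refers to the
    # symbol only by uuid, so renaming never touches the tree itself.
    symbols = {}

    def declare(name):
        sid = uuid.uuid4()
        symbols[sid] = name
        return sid

    counter = declare("counter")

    # A toy statement "counter = counter + 1", stored structurally.
    tree = ("assign", counter, ("add", ("ref", counter), ("const", 1)))

    def render(node):
        kind = node[0]
        if kind == "assign":
            return f"{symbols[node[1]]} = {render(node[2])}"
        if kind == "add":
            return f"{render(node[1])} + {render(node[2])}"
        if kind == "ref":
            return symbols[node[1]]
        return str(node[1])  # const

    print(render(tree))         # counter = counter + 1
    symbols[counter] = "total"  # the entire rename refactoring
    print(render(tree))         # total = total + 1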
>> "This particular implementation doesn't strike me as extremely useful, but stepping away form text based files opens up quite a lot of possibilities"
The visual representation could be improved and made easier to customize over time. What truly matters is the concept and the interaction approach, which enable the creation of a general-purpose VPL that can be used for any programming task.
PittleyDunkin 4 days ago [-]
> But I can still have a syntax error in my condition, so on what dimensions is this a net improvement vs just... typing the code?
To me this is a massive win. Structural editing solves so many problems we face as coders. Imagine not being able to save a file with invalid syntax! Imagine if changes were defined in structural or semantic terms rather than through text! The man-hours saved are mind-boggling to even consider.
Anyway intellij + java is already pretty close to this and java syntax is certainly not LESS ugly of an interface. I really just want my editor to not allow persisting anything but a coherent program....
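A minimal sketch of that "never persist an incoherent program" idea, assuming Python sources and using only the standard library parser as the gate (illustrative only; this is not how IntelliJ or any particular structural editor enforces it):

    import ast
    from pathlib import Path

    def save_if_valid(path, source):
        """Write source to path only if it parses; otherwise keep the old file."""
        try:
            ast.parse(source)  # raises SyntaxError on broken code
        except SyntaxError as err:
            print(f"refusing to save: {err.msg} at line {err.lineno}")
            return False
        Path(path).write_text(source)
        return True

    save_if_valid("ok.py", "x = 1\n")              # written
    save_if_valid("bad.py", "while x\n    pass")   # rejected, nothing written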
andylynch 4 days ago [-]
Specifically for Java (though earlier versions were LISP) - I had the pleasure of using SeeBeyond's eGate, which did almost exactly this.
It was a horrible, slow ordeal.
IntelliJ’s structural search / editing is very powerful, but also arcane enough that few people I know actually use it.
PittleyDunkin 4 days ago [-]
> It was a horrible, slow ordeal.
So is editing text. When you manage stacks of text (e.g. a VCS) text becomes even more of a nightmare.
RestartKernel 2 days ago [-]
I get what you're saying, but syntax is the least of my concerns. With a modern IDE, or even just a decent language server, I run into syntax errors very, very rarely. Everything else is the hard part.
hnlmorg 4 days ago [-]
I really want to like this because you can tell a lot of love has gone into it and visual programming environments are an area that needs a lot of love. But sadly this feels like it’s missed the mark by some margin.
Watching the videos and looking at the IDE, I’d say it’s much more like ordinary coding than a lot of the other visual programming languages I’ve seen and used.
If I had to describe this, I’d say it’s a mouse-driven IDE with YAML-like code. But in its current guise, I can see people itching to use the keyboard for speed once they know what text pieces they want.
Whereas some of the other visual programming languages, and their accompanying IDEs, aim to represent program logic in a way that is graphical rather than textual. Which is, I think, where you would normally draw the “no code” distinction.
szvsw 4 days ago [-]
While I agree with 99% of what you said…
> I’d say it’s a mouse-driven IDE with YAML-like code. But in its current guise, I can see people itching to use the keyboard for speed once they know what text pieces they want.
It’s useful to keep in mind that a “mouse-driven” IDE could have some real benefits from an accessibility perspective. I agree that this generally might miss the mark from a variety of goals, especially compared to graph based dataflow programming languages (see my other comment in this thread), and to really maximize accessibility as a goal you would need to actually design it for that from the ground up, but it’s still good to see experiments with other modalities and raise the question. At the end of the day the fraction of programmers who have difficulty typing is low… but that doesn’t mean it should be ignored, especially considering that one day almost all of us on here will have trouble typing, but likely still have the urge to code!
hnlmorg 4 days ago [-]
Accessibility is a good point. Though I can’t think of any disabilities where I couldn’t type but could drag and drop components, which requires a much more precise as well as sustained physical motion.
I suspect there are better ways of coding in those scenarios. Like perhaps the kind of word entry that Stephen Hawking used.
Or if you were to go totally touch UI (I’ve known one individual with motor disabilities use an old touch screen CRT with a stylus in their mouth) then you’d want those blocks a little taller but with something more akin to Miro’s infinite zoom. Rather than a text UI with mouse input.
That said, every project has to start somewhere. So this might be that proof of concept needed to fully flesh out accessibility tweaks.
szvsw 4 days ago [-]
Yeah good points. You can easily imagine composing graph based/data flow programming languages with just your mouth, ie “create node:<type>:<name>; connect node:<name>:<outlet> to node:<name>:<inlet>” as a relatively easy way to synchronize a visual programming language’s traditional mouse-based construction engine to speech based construction with realtime visual updates etc.
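A rough sketch of how such spoken commands could drive graph construction with realtime updates; the grammar follows the phrasing above, and the node types and names are made up (Python, illustrative only):

    import re

    nodes = {}   # name -> type
    edges = []   # (src_node, outlet, dst_node, inlet)

    def handle(command):
        """Apply one spoken command of the form sketched above."""
        m = re.fullmatch(r"create node:(\w+):(\w+)", command)
        if m:
            nodes[m.group(2)] = m.group(1)
            return
        m = re.fullmatch(r"connect node:(\w+):(\w+) to node:(\w+):(\w+)", command)
        if m:
            edges.append(m.groups())
            return
        raise ValueError(f"unrecognized command: {command}")

    handle("create node:oscillator:osc1")
    handle("create node:dac:out")
    handle("connect node:osc1:signal to node:out:left")
    print(nodes)   # {'osc1': 'oscillator', 'out': 'dac'}
    print(edges)   # [('osc1', 'signal', 'out', 'left')]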
smaudet 4 days ago [-]
As I said in another comment, I think the part of visual languages their devs are blind to (pun intended) is the input method(s).
We can all agree that keyboards are a) large b) necessitate being able to type (need hands) but we don't have that many high-frequency out-of-the-way input methods in computing, generally. I want to see an editor that can a) re-use existing code (libraries, concepts, etc) but that can b) use mouse, keyboard, touchscreen, hand gloves, <insert new input method here>.
Drawing boxes on screens and sticking text inside them is just not impressive, and it's clunky to boot. Any worth lies in whatever input method is provided, and TBH I wouldn't expect this to be any easier on e.g. mobile, probably worse, in fact.
szvsw 4 days ago [-]
Yeah, the high frequency input capabilities of hands/fingers on keyboard are pretty remarkable and hard to beat, especially in conjunction with navigation tools like vim and especially when you start considering things like stenography applied in a programming context (intellisense/copilot etc can in some ways be considered a form of that already).
I suspect natural language speech is too slow, but I do sometimes wonder if the only other part of our body capable of such high throughput (and the requisite fine motor control) is our mouth/tongue… I’ve always wondered what a programming language optimized for spoken input would look/sound like.
Edit: mentioned this in another comment, but I think graph based visual programming languages would actually be pretty well suited to speech entry, especially with shorthands for common operations like “create node” := “kurr” or “connect” := “kuh”.
Typing speed seems like a rather junior concern. In practice most time tends to be spent thinking or cycling through edits and inspecting test results. I think putting your tests in a watch -n 2 on the second screen would be a much better improvement than fiddling with some reinvention of stenography.
szvsw 4 days ago [-]
I agree that typing probably is not the limiting factor in most cases and for most people, and so the conversation might seem a little silly… But that’s precisely because typing is fast. Once you know what you are doing, you start typing and you are done relatively quickly, and probably spent a lot of the time typing thinking about what you would do next… but we are specifically talking about a scenario where typing is the limiting factor, or even entirely infeasible, for a variety of reasons.
smaudet 4 days ago [-]
This is still not helped by other input methods; stenography helps with the navigation too... Ctrl-clicking through symbols is all well and good but you often want or need to do much more complicated things than that.
So, yes, input speed is still a problem for reading and understanding code, too.
techwiz137 4 days ago [-]
I think given the advent of AI, there is less of a need for such a language. All I need is to type out a piece of concise text to the AI of my choice and I can have a general GUI up and running without much specific knowledge of a programming language.
MahmoudFayed 1 day ago [-]
We think differently. In PWCT2, we believe visual programming is more attractive in the age of AI and large language models (LLMs).
This video demonstrates how to use LLMs to generate code that can be directly used in PWCT2: https://www.youtube.com/watch?v=Fx--dNZvncc
Maybe I'm missing something obvious, but this reminds me of scratch (+ visual basic ish), which I would consider "coding".
Whether you type i-f or drag in "if" - it's still coding, right?
MahmoudFayed 4 days ago [-]
The design and programming concepts are related to programming, which is part of the software name Programming Without Coding Technology (PWCT). While coding involves writing textual code directly, Visual Programming Languages (VPLs) like Scratch and PWCT still require knowledge of programming concepts and problem-solving. However, they simplify the implementation process by reducing syntax errors and providing a graphical user interface (GUI) that can be in any human language. Visual components increase the level of abstraction, potentially reducing development time.
PWCT2 is influenced by:
1- Lava visual language - Using the TreeView
2- Forms/3 - Using the Data-Entry forms
3- Scratch - Blocks, Colors, and Translation
4- Visual Basic - Form Designer
5- Envision - Rich comments & Interactive Visualization
6- PWCT1
6.1 Graphical Code Replacement (GCR) method instead of drag and drop
6.2 PWCT1 - The Time Machine (Run programs in the past & Play programs as a movie)
PWCT2 introduces more features compared to PWCT1, such as importing Ring code, inserting steps, and more.
smaudet 4 days ago [-]
> potentially reducing development time
See this is where this falls over.
Syntax highlighting + autocomplete already handle this, and where this actually works against your language, you have the problem every GUI has, inflexibility.
A "coding" language can evolve, structures change, be expressive or verbose. Much like a language.
I doubt your UI can benefit from these abilities, whereas an IDE already provides visual cues etc. that reduce this development time, while preserving the expressiveness of the language.
So, it probably does not reduce development time, because the tools already solved this problem, and you just have a clunkier, more verbose, harder-to-develop language instead.
And the types that aren't coding in the first place are still going to struggle with things like edge cases, etc., so this isn't really democratizing anything either.
WillAdams 4 days ago [-]
Yes, but this eliminates syntax errors, and since it works well for folks who have difficulties w/ keyboards or are using styluses, it is something a bit different from the traditional view of programming as textual instructions.
It would be interesting if it were more graphical/symbolic, and unlike Scratch (or its derivative Blockly) there aren't any visual cues as regards enforced syntax/structure.
Interesting that they have a link for "Rich Comments (Adding Images)": https://www.youtube.com/watch?v=3yd72YrXxF0
Is it possible to install for Windows w/o using Steam?!?
For those who are curious there is a list of GUI elements which are supported:
https://doublesvsoop.sourceforge.net/pwct2/visualcomponents....
The ironic thing is that as visual programming languages become sufficiently robust, the necessity of typing quickly returns.
Blueprint in Unreal is the best example where you can try to dig through a tree of thousands of nodes (where hierarchies inherently start to become counter intuitive in at least some cases) or you can just start typing the node names.
And then once you start also specifying the parameters you end up, more or less, typing out the calls anyhow - but with the downside that visual programming rapidly becomes inscrutable at scale with size + flow control/branching.
To run the software without steam, install Ring language (Windows/Linux/macOS) then run it directly from the source code: https://github.com/PWCT/pwct2?tab=readme-ov-file#running-pwc...
In terms of adding more graphical/symbolic elements, we will address this in PWCT3. Additionally, publishing the PWCT2 source code will enable developers to extend the software in various directions based on their vision.
smaudet 4 days ago [-]
> who have difficulties w/ keyboards/are using styluses is something a bit different from the traditional view
This part would be interesting, I perennially wish I could code on my tablet, but the IDEs there are just too awkward.
The big issue is, I still want a touch-type interface (like a keyboard) so that I can rapidly access many different concepts, but the Android UI is just too constrained by me having to peck slowly one key at a time.
I.e. the problem (there) is not the code, it is the input method. What I really need is the ability to access my idioms with multiple fingers, but without smudging my feedback loop (the mobile screen).
Perhaps some method of inputting code as symbols with a stylus could work better? Although it might still be very awkward...
cess11 4 days ago [-]
If you think this looks kind of nice but want to type in the code yourself, here's a couple of alternatives that might fit.
https://pharo.org/
https://factorcode.org/
https://old.reddit.com/r/programming/comments/1hslkbk/after_...
>After 8 years of development and delivering it to thousands of users, today I am open sourcing my visual programming language.
The discussion there is worth reviewing as well.
nimish 4 days ago [-]
Alice did this 20 years ago? Smalltalk did this 50 years ago?
What's new and cool about this one? I'm jaded and cynical. I want to be proven wrong.
MahmoudFayed 1 day ago [-]
PWCT (Programming Without Coding Technology) is designed as a general-purpose visual programming language and is used for developing the Ring programming language Compiler/VM. (Research paper: https://www.mdpi.com/2079-9292/13/23/4627)
With respect to Visual Programming Languages, the main contributions of PWCT are:
1- Introducing the Graphical Code Replacement (GCR) method as an alternative to Drag-and-Drop. (Research paper: https://link.springer.com/article/10.1007/s42486-020-00038-y)
2- Using the Time dimension at the program design level, which allows running programs in the past and playing programs as a movie.
3- The first VPL to be used in the development of a Compiler and Virtual Machine for a TPL.
PWCT is influenced by Lava, Forms/3, and Limnor. PWCT2 incorporates more features from other VPLs like Scratch and Envision.
PWCT2 improves upon PWCT by providing a faster, cross-platform environment that supports importing and exporting Ring code. Additionally, the implementation of the environment has been switched from Visual FoxPro to the Ring language. Since PWCT2 supports importing Ring code and is written in Ring, it is a self-hosting VPL. Also, PWCT2 adds the auto-run feature to the Time dimension.
JaDogg 4 days ago [-]
Good stuff, I like how individual tokens have colours in the tree view. It seems like a visual syntax tree.
MahmoudFayed 4 days ago [-]
Thanks for your kind words
WillAdams 4 days ago [-]
How are syntax/structure enforced? Scratch/Blockly uses shapes to ensure things work, while node editors will disallow invalid connections.
MahmoudFayed 4 days ago [-]
In PWCT1 we provide two modes
1. (Free Editor + VPL Compiler) which allows syntax errors and can detect them
2. (Syntax Directed Editor) which prevents errors.
In PWCT2, at the current stage, we provide the (Free Editor). The (VPL Compiler) will be added to detect errors and the (SDE) will be added to prevent them, as we did in PWCT1.
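As a side note on the "node editors will disallow invalid connections" point above, a generic sketch of how that is usually done, by type-checking ports before an edge is ever created (illustrative only; node and port names are made up and this is not PWCT's Syntax Directed Editor):

    # Each port carries a type; connect() refuses mismatches up front, so an
    # ill-typed graph can never be built in the first place.
    ports = {
        ("add1", "a"): "number",
        ("add1", "out"): "number",
        ("label1", "text"): "string",
    }
    connections = []

    def connect(src, src_port, dst, dst_port):
        out_type = ports[(src, src_port)]
        in_type = ports[(dst, dst_port)]
        if out_type != in_type:
            raise TypeError(f"cannot connect {out_type} output to {in_type} input")
        connections.append((src, src_port, dst, dst_port))

    connect("add1", "out", "add1", "a")           # number -> number: accepted
    try:
        connect("add1", "out", "label1", "text")  # number -> string: rejected
    except TypeError as err:
        print(err)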
szvsw 4 days ago [-]
Just going to highlight some of my favorite graph/flow based programming environments:
- Max/MSP - originally for audio/visual synthesis, but widely used for generative art (in the meaning of generative that existed before the AI boom of the 2020s)
- Grasshopper & Dynamo: structural/geometry/architectural generative design
- Modelica/Dymola/OpenModelica etc: symbolic equation modeling and diffeq solving, widely used in HVAC/systems modeling, automotive, aeronautical design
- Modular synthesizers - analog audio synthesis (and these days, digital as well!)
It’s interesting that almost all of the examples mentioned above are abstractions on top of lower level programming frameworks tailored to specific tasks, which lets them offer fantastic, intuitive interfaces that are easy to learn and quick for iterating incredibly complex designs within their domains, while the example linked here is not really domain-specific and is much closer to traditional programming (or something like Scratch).
jitl 4 days ago [-]
This YouTube channel "Ussa Design" has some really neat process videos of using the Rhino3D + Grasshopper parametric design approach to build a whole bunch of different stuff. But, not much in the way of explanation or tutorial, mostly just a watch-along and chill out kind of vibe.
I really like the idea of parametric CAD design via flow programming - you can go back and change any parameter later. Versus typical 3D modeling or CAD workflows where you basically are working with digital versions of typical shop tools like lathes, routers, extruders, etc where a fillet or cut you made 10 hours ago can't be changed without "rebuilding by hand".
https://www.youtube.com/watch?v=bP-eWFpm-IY
https://www.youtube.com/watch?v=wdaCA8CUoF8
https://www.ussadesign.com/
There are many many many dedicated grasshopper modeling channels, both in the “watch and chill” style and the explanatory style. Once you’ve done lots of grasshopper programming and can recognize component icons from both the base library and the cornucopia of grasshopper plugins, you can skip through all of those videos at like 3x speed and very rapidly absorb the graph structure.
One of the most interesting parts of grasshopper is how components operate over tree based data structures - ie the equivalent of numpy broadcasting.
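For anyone who hasn't met the numpy half of that analogy, a tiny broadcasting example; Grasshopper's data-tree matching is loosely the same idea of pairing every item in one branch against the items in another (illustrative only):

    import numpy as np

    lengths = np.array([1.0, 2.0, 3.0]).reshape(3, 1)  # a "branch" of lengths
    widths = np.array([10.0, 20.0])                     # a "branch" of widths

    # Broadcasting pairs every length with every width, much like a Grasshopper
    # component cross-referencing two branches of a data tree.
    areas = lengths * widths
    print(areas)
    # [[10. 20.]
    #  [20. 40.]
    #  [30. 60.]]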
jitl 4 days ago [-]
Awesome! I had no idea, for some reason I just assumed this was a niche topic no one was interested in. Do you have any recommendations for how to learn grasshopper for a casual like me who only feels comfortable in SketchUp?
szvsw 4 days ago [-]
It really really depends on what your goals are and what type of design domain you are working in, eg jewelry, architecture (and what within architecture), fabrication, experimental sculpture, etc.
At the end of the day though… play. Lots of play! Playing within Grasshopper and just making crazy things is a great way to build intuition.
At the same time, it of course is very helpful to have directional play, ie problem solving, frequently in the form of trying to recreate - or parametrize/generalize/abstract - a specific pre-existing design.
It also helps a lot if you are pretty familiar with modeling in rhino (ie working with construction planes, curves, etc) since a lot of grasshopper work is just a matter of formally capturing a sequence of operations that you might perform in Rhino into a codified DAG. Unfortunately some of the naming of rhino operations does not align with their corresponding grasshopper components (and the corresponding components might not perfectly align their api with the corresponding rhino commands) but developing that geometric intuition is essential.
There’s also some stuff that is ultimately more efficient to program within python / CSharp / Visual Basic within grasshopper so always keep that in mind… but it can become another layer of complexity (or simplification…) to learn at the same time.
https://www.blockscad3d.com/editor/
https://github.com/derkork/openscad-graph-editor
http://nodezator.com/
Also worth noting is:
https://ryven.org/
How about sticking wires into a breadboard socket to create ones and zeros: that must also not be coding since it's just hooking up a graph.
nativeit 4 days ago [-]
I think in this context, it’s attempting to draw a distinction between “programming” and “coding”. Reasonable minds can disagree about whether or not this is successful (or even all that meaningful).
kazinator 4 days ago [-]
The idea that a programmer can work on an executable form of the solution, but not be coding, disappeared decades and decades ago and is probably not worth resurrecting.
The term "coding" in computing originally referred to encoding a program in the language of the machine: "machine code". Machine code was probably called that because it is cryptic, not allowing calculations to be specified in ordinary notation and in a machine-independent way.
Programming was the overall activity of designing a program, which could have involved planning it on paper; coding was the manual translation of the design into machine code.
Early programs that translated higher level statements into machine code were called "automatic coders" and not "compilers", because they helped with the step that was already called "coding". The idea was that when you prepare input for the "automatic coder", in a higher level language like Fortran or whatever, you are programming, but not yet coding: the automatic coder is the thing that is coding for you; i.e. making machine code.
(The term "compiling" also existed, but it had a meaning similar to the dictionary meaning: something like accumulating multiple library functions into a single image. I think, something similar to producing a .a archive from .o files. The way we use "compiling" and "compiler" today doesn't make sense in regard to the ordinary meaning of the word.)
Eventually, writing in a high level language became "coding". The idea is that as soon as you begin steps which communicate your solution to the computer, which the computer will take as-is and execute, you're coding: you're encoding the abstract idea into a form which the machine understands, whether that be assembly language or Prolog.
Those aspects of programming that are not coding have to do with clarifying requirements and planning the solution (and, later, testing and debugging). Once you have clarified the requirements and planned the solution, and have started using a GUI to hook up functional blocks to make it happen, you are coding the solution.
There is no need to go back to the old terminology whereby you are programming the solution, and the "automatic coder" then takes your graph and turns it into computer code. Not even if we think that the terminology was good. It was good for "compiling" to just refer to sticking things into a bundle, rather than translating in a complex way, but we are not going back to that, either.
We will just have to see how the language develops. Maybe developers will prefer not to call any graphical manipulation of functional blocks "coding". Especially if it's in a paradigm where the graph is translated into code that they often look at and perhaps edit, because you then might want the word "code" to unambiguously refer to that code.
teddyh 4 days ago [-]
When bugs too easily derange or mung the programs of machines;
When programs too "intelligent" start taking over the machines:
Is this the end of AutoProg?
— The HACTRN
daft_pink 4 days ago [-]
[flagged]
MahmoudFayed 4 days ago [-]
PWCT2 supports the importation of textual code written in Ring. This allows us to leverage large language models to write the code and then import it. Here is a video demonstration: https://www.youtube.com/watch?v=Fx--dNZvncc
daft_pink 4 days ago [-]
Sorry, it was just a joke.
anonzzzies 4 days ago [-]
Still a good answer from OP.
rad_gruchalski 4 days ago [-]
Use DALL-E.