
Support talking to printers via json serial server #8

Open
DanielJoyce opened this issue Jan 27, 2016 · 10 comments

Comments

@DanielJoyce
Collaborator

No description provided.

@WesGilster

Do you have a protocol planned for your JSON serial server? Just curious, as I have a project that has a fully functional backend with quite a bit of print functionality. We could easily implement one of our PrintFileProcessors to listen to your JSON and print accordingly.

https://github.com/area515/Creation-Workshop-Host

@DanielJoyce
Collaborator Author

Ahh, sorry I missed your comment. I was going to use Serial Port JSON Server to talk to some kind of existing board/firmware.

Is there an existing format that you support that I could reuse? I could dump a set of images in a zip along with some JSON information. I don't know what existing formats are available.

The slicer is all camera tricks, fragment/vertex shaders, and image processing. I am in the initial stages of defining the new advanced slicer, which, using these techniques, will support external supports, internal supports, shelling (possibly hollow shells with their own infill pattern), rafts, etc.

Basically, think of it as 'scanline/slice voxelization': instead of building an octree, the output, slice by slice, is a binary image immediately suitable for printing.

If your 3D card can load it, it can be printed.
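
For context, one common way to get this kind of GPU-side slice mask (a sketch of the general technique, not necessarily the exact approach described above) is stencil-buffer parity counting: render the mesh through an orthographic camera whose near plane sits at the slice height, flip the stencil bit for every fragment, and then fill white wherever the count ends up odd. Roughly, in WebGL:

```typescript
// Sketch: produce a binary slice mask at the current slice height using
// stencil parity. Assumes a WebGL context created with { stencil: true },
// a program whose orthographic projection places the near plane at the
// slice height, and mesh buffers already bound with `triangleCount` triangles.
function renderSliceMask(
  gl: WebGLRenderingContext,
  program: WebGLProgram,
  triangleCount: number
): void {
  gl.enable(gl.STENCIL_TEST);
  gl.disable(gl.DEPTH_TEST);
  gl.disable(gl.CULL_FACE);                 // both front and back faces must count
  gl.colorMask(false, false, false, false); // pass 1 writes only the stencil

  // Pass 1: every surface crossing above the slice plane inverts the stencil,
  // so pixels covered an odd number of times are inside the solid.
  gl.clear(gl.COLOR_BUFFER_BIT | gl.STENCIL_BUFFER_BIT);
  gl.stencilFunc(gl.ALWAYS, 0, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.INVERT);
  gl.useProgram(program);
  gl.drawArrays(gl.TRIANGLES, 0, triangleCount * 3);

  // Pass 2: fill white only where the low stencil bit is set (odd parity),
  // leaving the black-and-white slice image in the framebuffer.
  gl.colorMask(true, true, true, true);
  gl.stencilFunc(gl.EQUAL, 1, 0x01);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
  // ...draw a full-screen quad with a trivial white fragment shader here...
}
```

The framebuffer after pass 2 is already the printable layer image, which is why no intermediate vector geometry is needed.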

@DanielJoyce
Collaborator Author

I guess theoretically it should be relatively easy to port the slicer to a Java 3D library of some sort.

@jmkao
Contributor

jmkao commented Mar 28, 2016

I think there are a few ways we could go, and we have the option to design our own interaction protocol.

CWH (soon to be renamed Photonic3D) has a fairly robust set of motion templates (GCode templates and exposure/lift calculators), as well as a physical resolution calibration tool that can determine the X/Y pixel dimensions & pixel density (pixels/mm) based on an interactive UI.

We also have support today for taking a zip of images and then printing it, which utilizes the motion templates but not the resolution calibration.

Off the top of my head there are a few options (which are not necessarily mutually exclusive, but might be ordered in some kind of evolutionary roadmap):

  • Supply zip of PNGs and user has to ensure calibration parameters are the same
  • Supply zip of PNGs with a JSON file of user-supplied parameters that could be checked to see if calibration parameters are the same (a rough sketch of such a file follows this list)
  • Query for calibration parameters, then generate zip of PNGs and include the JSON file to check if the calibration parameters are the same
  • Supply SVG vector file based on physical dimensions that would be rendered to pixels by CWH, using its calibration parameters
  • Query for calibration parameters to build a more accurate build-plate previewer, then generate and supply SVG vector file
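
As a rough illustration of the JSON-manifest options above (the field names here are invented for the example, not an agreed format), the file shipped inside the zip might look something like this:

```typescript
// Hypothetical manifest bundled in the zip next to slice_0000.png, slice_0001.png, ...
// None of these names are part of any agreed CWH/slicer protocol yet.
interface SliceJobManifest {
  sliceHeightMm: number;   // layer thickness used when slicing
  pixelsPerMmX: number;    // must match the printer's X calibration
  pixelsPerMmY: number;    // must match the printer's Y calibration
  imageWidthPx: number;
  imageHeightPx: number;
  sliceCount: number;
  slicerVersion: string;
}

const example: SliceJobManifest = {
  sliceHeightMm: 0.05,
  pixelsPerMmX: 21.3,
  pixelsPerMmY: 21.3,
  imageWidthPx: 1920,
  imageHeightPx: 1080,
  sliceCount: 812,
  slicerVersion: "0.1.0",
};

// The host would compare pixelsPerMm* and the image dimensions against its
// own calibration and refuse (or warn) when they do not match.
```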

I think for graphics card (e.g. WebGL) based acceleration, the browser is a better place to do this than with Java3D. The libraries necessary to get Java3D working in a hardware accelerated fashion on the Raspberry Pi are not very mature (I think a compatible OpenGL implementation was only checked into Jessie in February, and contains some rendering bugs) and don't have the kind of developer manpower and user base that WebGL has.

@DanielJoyce
Collaborator Author

So it seems offloading slicing to the managing computer's browser might be a useful feature as well.

Well, I could definitely add a "Hosted" mode which could pull printer information from CWH and use it for slicing, or you could simply reuse the components for your own hosted version with some new glue. In the near term I plan on working on the new slicer, and then we can talk more. Do you have a REST API or websocket protocol for pulling printer information?

@WesGilster

We are in the middle of "swaggerizing" our RESTful API, and you can follow our progress on that issue (area515/Photonic3D#188) to find out when that will be ready. We currently use WebSockets as an event notification system. Even our WebSocket API has a very RESTful style, in that it's designed to connect directly to specific printers and print jobs. Once that socket is connected, you get events for the object you specified in the URL.

Here is a quick rundown:

  1. The RESTful API is broken up into 7 services: Printables, Machine, Settings, Printers, PrintJobs, Media, and video-based ProgressiveDownload.
  2. File uploads allow multipart/form-data or application/octet-stream (REST-based URLs)
  3. Regular video supports any content type through progressive download (REST-based but a bit sloppy)
  4. Live video streams are multipart/x-mixed-replace (REST-based URLs)
  5. Image snapshots are image/jpg and some are PNG (REST-based URLs)
  6. Optional SSL and Basic Authentication.

It's very simple to consume.
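
For anyone experimenting with it, consuming that style of notification socket from a browser or Node client is only a few lines. The URL and payload shape below are placeholders, not Photonic3D's actual endpoints; the swaggerized API linked above is the authoritative reference once it lands:

```typescript
// Hypothetical example: subscribe to events for one print job.
// Replace host, port, and path with the real Photonic3D endpoints.
const ws = new WebSocket("ws://printer-host:9091/hypothetical/printJobs/42");

ws.onmessage = (event: MessageEvent) => {
  // Each message is assumed here to be a JSON event for the object named in the URL.
  const notification = JSON.parse(event.data as string);
  console.log("print job event:", notification);
};

ws.onerror = () => console.error("websocket error");
```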

We are looking at integrating your slicer for two different functions:

  1. Showing the current slice of the model that is currently being printed.
  2. Slicing STL models for printing on the server.

As a part of the second function, I'd like to work out a vector-based protocol (SVG?) so we don't have to ship these huge graphics back and forth to the server. I'd also like to work out a standard interface that we can both agree on.

@DanielJoyce
Collaborator Author

The whole point of the slicer is to avoid vector slicing. It's rendered entirely as a raster image by the GPU and shaders. It's a brutally efficient collection of rendering 'hacks'.

SVG would slow it down because it requires actual geometry processing, and then you get into all sorts of additional nasty issues if you want to do shelling.

As for the images, they could be shipped as PNGs or GIFs. They are black and white and would compress to almost nothing.

I made a 2048x2048 black image, drew a bunch of white shapes on it, and saved it as a GIF. It came out to 54 KB, and around 100 KB as a PNG. There may be better formats to use.

It's also perfectly possible to run the slicer 'headless' in the DOM by rendering with an off-screen WebGL context in an off-screen canvas element, sending a copy of each image off to be saved, and blitting the image to a visible canvas so the user can watch progress.
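
A minimal sketch of that headless flow, assuming a 2048x2048 off-screen canvas and a visible canvas with id "preview" for progress (all names illustrative):

```typescript
// Render slices into an off-screen canvas, export each one as a PNG blob,
// and blit a copy onto a visible canvas so the user can watch progress.
const offscreen = document.createElement("canvas");
offscreen.width = 2048;
offscreen.height = 2048;
const gl = offscreen.getContext("webgl", {
  stencil: true,
  preserveDrawingBuffer: true, // keep the buffer readable after compositing
});

const preview = document.getElementById("preview") as HTMLCanvasElement;
const previewCtx = preview.getContext("2d")!;

function captureSlice(): void {
  // ...render the current slice into `gl` here...

  // Blit the rendered slice to the visible preview canvas.
  previewCtx.drawImage(offscreen, 0, 0, preview.width, preview.height);

  // Export the slice as a compressed PNG for saving or zipping.
  offscreen.toBlob((blob) => {
    if (blob) {
      // hand the blob off to whatever collects slices for upload
    }
  }, "image/png");
}
```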

Tell you what, I'll open a new ticket for tracking this idea. I'm normally a Java dev and have Gradle experience as well. I can fork or create a new repo to build a component that does what you need.

@DanielJoyce
Collaborator Author

see #11

@DanielJoyce
Collaborator Author

I just want to say that when WebGL adds support for microtessellation, especially the kind directed by texture/vector maps, it will be trivial to support it in the slicer. Users could then use a texture to modify a model, or tell the OpenGL context to 'smooth' it (by subdividing further via microtessellation), and the slicer would basically support it automagically.

@WesGilster

The raster/vector issue isn't a deal breaker. Just keep in mind that, for the CPU savings we gain in WebGL, we lose some in I/O transfers, zip compression, and zip expansion. I'm still pretty sure there is plenty of performance gain from client-side slicing.

You mentioned you were a Java coder, so you might be interested in taking a look at our parallel slicer for STL. It's fully functional and comes with a Swing GUI to help debug slicing issues. The general benefit is that you don't have to perform a "preslice" phase and zip the contents before printing. Instead, each slice is computed in parallel while the G-code commands and motors are executing the previous slice. The nice part of this approach is that the slicer doesn't need to be especially efficient; it just needs to be faster than the motors and exposure time of the previous slice, which is almost always attainable. The other benefit is that since we are in control of low-level slicing, we keep track of non-manifold geometry and can report that information back to the REST client as errors.

The problem with this approach is that the slicer has to be perfect in order to be trusted, and ours needs some work. The other downside is that you don't get all of the cool features that come with WebGL, as you've mentioned.

Let me know if we can be of any help.
