Sourcery Starbot ⭐ refactored vinothpandian/akin-generator #7

Open
wants to merge 1 commit into base: main
4 changes: 1 addition & 3 deletions src/api.py
@@ -62,6 +62,4 @@ async def generate_wireframes(
),
):

response = generate_wireframe_samples(ui_design_pattern_type, sample_num=8)

return response
return generate_wireframe_samples(ui_design_pattern_type, sample_num=8)

Function generate_wireframes refactored with the following changes:
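The change here inlines a variable that was assigned and immediately returned. A minimal sketch of the before/after shapes, using a hypothetical stand-in for the real sampler (names are illustrative, not the project's API):

```python
def generate_wireframe_samples_stub(pattern, sample_num=8):
    # Hypothetical stand-in for the real generate_wireframe_samples().
    return [f"{pattern}-{i}" for i in range(sample_num)]

# Before: assign to a temporary, then return it.
def generate_wireframes_before(pattern):
    response = generate_wireframe_samples_stub(pattern, sample_num=8)
    return response

# After: return the call result directly.
def generate_wireframes_after(pattern):
    return generate_wireframe_samples_stub(pattern, sample_num=8)
```

Both forms are behaviorally identical; the inlined version simply drops a name that carried no extra meaning.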

18 changes: 9 additions & 9 deletions src/attention.py
@@ -30,9 +30,9 @@ def __init__(self, data_format='channels_last', **kwargs):
self.data_format = data_format

def build(self, input_shapes):
self.gamma = self.add_weight(self.name + '_gamma',
shape=(),
initializer=tf.initializers.Zeros)
self.gamma = self.add_weight(
f'{self.name}_gamma', shape=(), initializer=tf.initializers.Zeros
)
Comment on lines -33 to +35

Function _Attention.build refactored with the following changes:


def call(self, inputs):
if len(inputs) != 4:
@@ -42,20 +42,20 @@ def call(self, inputs):
key_tensor = inputs[1]
value_tensor = inputs[2]
origin_input = inputs[3]

input_shape = tf.shape(query_tensor)

if self.data_format == 'channels_first':
height_axis = 2
width_axis = 3
else:
height_axis = 1
width_axis = 2

batchsize = input_shape[0]
height = input_shape[height_axis]
width = input_shape[width_axis]

Comment on lines -45 to +58

Found the following improvement in Function _Attention.call:

if self.data_format == 'channels_first':
proj_query = tf.transpose(
tf.reshape(query_tensor, (batchsize, -1, height*width)),(0, 2, 1))
@@ -71,11 +71,11 @@ def call(self, inputs):
energy = tf.matmul(proj_query, proj_key)
attention = tf.nn.softmax(energy)
out = tf.matmul(proj_value, tf.transpose(attention, (0, 2, 1)))

if self.data_format == 'channels_first':
out = tf.reshape(out, (batchsize, -1, height, width))
else:
out = tf.reshape(
tf.transpose(out, (0, 2, 1)), (batchsize, height, width, -1))

return tf.add(tf.multiply(out, self.gamma), origin_input), attention
18 changes: 6 additions & 12 deletions src/colorMapper.py
@@ -51,18 +51,15 @@ def save_color_map(label_color_map, save_image_file):
fontScale = 0.4
fontColor = (0, 0, 0)
lineType = 1
i = 0
for k, v in label_color_map.items():
for i, (k, v) in enumerate(label_color_map.items()):
img[(i * side) : (i * side) + side, 0:side, :] = [v[0], v[1], v[2]]
bottomLeftCornerOfText = (int(side * 1.5), (i * side) + int(side / 2))
bottomLeftCornerOfText = int(side * 1.5), i * side + side // 2
img = cv2.putText(img, str(k), bottomLeftCornerOfText, font, fontScale, fontColor, lineType)
i += 1
Comment on lines -54 to -59

Function ColorMapper.save_color_map refactored with the following changes:
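The pattern applied here replaces a manually incremented counter with `enumerate()`. A minimal sketch with toy data (independent of OpenCV):

```python
label_color_map = {"button": (255, 0, 0), "text": (0, 255, 0)}

# Before: a manually incremented counter alongside the loop variable.
rows_before = []
i = 0
for k, v in label_color_map.items():
    rows_before.append((i, k, v))
    i += 1

# After: enumerate() supplies the index, removing the bookkeeping.
rows_after = [(i, k, v) for i, (k, v) in enumerate(label_color_map.items())]
```

The diff also swaps `int(side / 2)` for `side // 2`; floor division gives the same result for non-negative integers.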

cv2.imwrite(save_image_file, img)

@staticmethod
def get_sorted_colors(colors, max_i):
sorted_color = []
sorted_color.append(colors.pop(max_i))
sorted_color = [colors.pop(max_i)]
Comment on lines -64 to +62

Function ColorMapper.get_sorted_colors refactored with the following changes:

metric = "euclidean"
while len(colors) > 0:
n = np.array(sorted_color)
@@ -78,16 +75,13 @@ def map_colors_to_labels(sorted_colors, sorted_labels):
print(len(sorted_colors))
print(len(sorted_labels))
assert len(sorted_colors) == len(sorted_labels)
label_map = {}
for i, label in enumerate(sorted_labels):
label_map[label] = sorted_colors[i]
return label_map
return {label: sorted_colors[i] for i, label in enumerate(sorted_labels)}
Comment on lines -81 to +78

Function ColorMapper.map_colors_to_labels refactored with the following changes:
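The refactor collapses a loop that builds a dict into a dict comprehension. A small sketch of the equivalence with toy values:

```python
sorted_labels = ["button", "icon", "text"]
sorted_colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]

# Before: build the mapping with an explicit loop.
label_map_loop = {}
for i, label in enumerate(sorted_labels):
    label_map_loop[label] = sorted_colors[i]

# After: a single dict comprehension with the same pairing.
label_map_comp = {label: sorted_colors[i] for i, label in enumerate(sorted_labels)}
```

Given the equal lengths asserted just above in the diff, `dict(zip(sorted_labels, sorted_colors))` would be an equally idiomatic alternative.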


@staticmethod
def save_label_map(label_map, file):
with open(file, "w+") as f:
for k, v in label_map.items():
s = k + "," + str(v[0]) + "," + str(v[1]) + "," + str(v[2]) + "\n"
s = f"{k},{str(v[0])},{str(v[1])},{str(v[2])}" + "\n"
Comment on lines -90 to +84

Function ColorMapper.save_label_map refactored with the following changes:

f.write(s)

@staticmethod
@@ -105,7 +99,7 @@ def read_label_color_map(file, bgr=False):
color = [int(s[1]), int(s[2]), int(s[3])]
label_color_map[label] = color
else:
print(str(file) + " file does not exists")
print(f"{str(file)} file does not exists")
Comment on lines -108 to +102

Function ColorMapper.read_label_color_map refactored with the following changes:

return label_color_map

@staticmethod
13 changes: 4 additions & 9 deletions src/get_annotations.py
@@ -50,8 +50,7 @@ def threshold(img):
m2 = sub_threshold(img[:, :, 1], True, True)
m3 = sub_threshold(img[:, :, 2], True, True)

res = cv2.add(m1, cv2.add(m2, m3))
return res
return cv2.add(m1, cv2.add(m2, m3))

Function threshold refactored with the following changes:



def erode(thresh):
@@ -63,8 +62,7 @@ def erode(thresh):
def unsharp(imgray):
imgray = imgray.copy()
gaussian = cv2.GaussianBlur(imgray, (7, 7), 10.0)
unsharp_image = cv2.addWeighted(imgray, 2.5, gaussian, -1.5, 0, imgray)
return unsharp_image
return cv2.addWeighted(imgray, 2.5, gaussian, -1.5, 0, imgray)
Comment on lines -66 to +65

Function unsharp refactored with the following changes:



def get_nearest_dominant_color(img):
@@ -112,12 +110,10 @@ def get_wireframe(i, image, category)

height, width, _ = original.shape

wireframe: WireframeSchema = WireframeSchema(
return WireframeSchema(
id=str(i), width=width, height=height, objects=objects
)

return wireframe
Comment on lines -115 to -119

Function get_wireframe refactored with the following changes:



def get_category_value(category: UIDesignPattern):
if category == UIDesignPattern.login:
@@ -140,5 +136,4 @@ def generate_wireframe_samples(category: UIDesignPattern, sample_num=16, z_dim=1
c = tf.reshape(c, [sample_num, 1])
samples = GEN([z, c])[0].numpy()
images = np.array([resize_screen(x, cv2.INTER_NEAREST) for x in samples])
wireframes = [get_wireframe(i, image, category) for i, image in enumerate(images)]
return wireframes
return [get_wireframe(i, image, category) for i, image in enumerate(images)]
Comment on lines -143 to +139

Function generate_wireframe_samples refactored with the following changes:

17 changes: 8 additions & 9 deletions src/postProcessing.py
@@ -28,10 +28,8 @@ def threshold(img):
m2 = sub_threshold(img[:, :, 1], 2, True, True) # --- threshold on green channel
m3 = sub_threshold(img[:, :, 2], 3, True, True) # --- threshold on red channel

# --- adding up all the results above ---
res = cv2.add(m1, cv2.add(m2, m3))
# cv2.imwrite("image_thresh.png", res)
return res
return cv2.add(m1, cv2.add(m2, m3))
Comment on lines -31 to +32

Function threshold refactored with the following changes:

This removes the following comments ( why? ):

# --- adding up all the results above ---



def erode(thresh, st):
@@ -45,10 +43,9 @@ def unsharp(imgray, st):
# Unsharp mask here
imgray = imgray.copy()
gaussian = cv2.GaussianBlur(imgray, (7, 7), 10.0)
unsharp_image = cv2.addWeighted(imgray, 2.5, gaussian, -1.5, 0, imgray)
# cv2.imwrite("unsharp_"+str(st)+".jpg", unsharp_image)

return unsharp_image
return cv2.addWeighted(imgray, 2.5, gaussian, -1.5, 0, imgray)

Function unsharp refactored with the following changes:



def get_bounding_boxes(dir, image_name, dst_path, dir_name):
@@ -69,9 +66,11 @@ def get_bounding_boxes(dir, image_name, dst_path, dir_name):
continue
cv2.rectangle(new_semantic, (x, y), (x + w, y + h), dominant_color, 3)
elements.append({"points": [[x, y], [x + w, y + h]], "label": label})
cv2.imwrite(os.path.join(dst_path, image_name[:-4] + "0.png"), image)
cv2.imwrite(os.path.join(dst_path, image_name[:-4] + "1.png"), new_semantic)
create_json_file(os.path.join(dst_path, image_name[:-4] + ".json"), elements, dir_name)
cv2.imwrite(os.path.join(dst_path, f"{image_name[:-4]}0.png"), image)
cv2.imwrite(os.path.join(dst_path, f"{image_name[:-4]}1.png"), new_semantic)
create_json_file(
os.path.join(dst_path, f"{image_name[:-4]}.json"), elements, dir_name
)
Comment on lines -72 to +73

Function get_bounding_boxes refactored with the following changes:



def get_nearest_dominant_color(img):
@@ -91,7 +90,7 @@ def create_json_file(path, elements, flag):
"imageWidth": 360,
"flags": {flag: True}
}
if data is not None and len(data) > 0:
if data is not None and data:
Comment on lines -94 to +93

Function create_json_file refactored with the following changes:

with open(path, "w+") as ff:
json.dump(data, ff, indent=True)

62 changes: 27 additions & 35 deletions src/prototypeGenerator.py
@@ -19,17 +19,14 @@ def load_all_ui_images():
label = s[0]
img_name = s[1]
if len(img_name) > 0:
img_path = os.path.join(android_elements_base_path, img_name + ".jpg")
img_path = os.path.join(android_elements_base_path, f"{img_name}.jpg")
if not os.path.exists(img_path):
img_path = os.path.join(android_elements_base_path, img_name + ".png")
img_path = os.path.join(android_elements_base_path, f"{img_name}.png")
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
else:
img = None
text = s[2]
if text is None or len(text) == 0:
text = None
else:
text = text.strip().split(",")
text = None if text is None or len(text) == 0 else text.strip().split(",")
Comment on lines -22 to +29

Function load_all_ui_images refactored with the following changes:

resize = int(s[3])
android_label_map[label] = {"img": img, "text": text, "resize": resize, "label": label}
return android_label_map
@@ -43,15 +40,14 @@ def get_elements(path, real):
data = json.load(f)
if real:
return SemanticJsonParser.read_json(data, label_hierarchy_map)
else:
shapes = data["shapes"]
flags = data["flags"]
for shape in shapes:
label = shape["label"]
points = shape["points"]
elements.append(
[label, [int(points[0][0]), int(points[0][1]), int(points[1][0]), int(points[1][1])]]
)
shapes = data["shapes"]
flags = data["flags"]
for shape in shapes:
label = shape["label"]
points = shape["points"]
elements.append(
[label, [int(points[0][0]), int(points[0][1]), int(points[1][0]), int(points[1][1])]]
)
Comment on lines -46 to +50

Function get_elements refactored with the following changes:

except Exception as e:
print(e)
return elements
@@ -71,9 +67,9 @@ def element_resize(img, w, h, flag, base_shade):
"""
if flag == 0:
return img, 0
elif flag == 1 or flag == 5:
elif flag in [1, 5]:
return cv2.resize(img, (w, h)), 0
elif flag == 2 or flag == 3 or flag == 4:
elif flag in [2, 3, 4]:
Comment on lines -74 to +72

Function element_resize refactored with the following changes:

  • Replace multiple comparisons of same variable with in operator [×2] (merge-comparisons)
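The merge-comparisons rule named above turns chained `==`/`or` tests into a membership check. A self-contained sketch of the control flow (return labels are illustrative, not the project's values):

```python
def describe_resize(flag):
    # Before-style chained equality:
    #   if flag == 1 or flag == 5: ...
    # After: one membership test per branch.
    if flag == 0:
        return "unchanged"
    elif flag in (1, 5):
        return "stretch"
    elif flag in (2, 3, 4):
        return "pad"
    return "unknown"
```

The diff uses lists (`[1, 5]`); tuples or sets behave identically for membership tests over a handful of constants.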

label_image = np.ones((h, w, 3)) * base_shade
ih = img.shape[0]
iw = img.shape[1]
@@ -103,9 +99,9 @@ def element_resize_old(img, w, h, flag, base_shade):
"""
if flag == 0:
return img, 0
elif flag == 1 or flag == 5:
elif flag in [1, 5]:
return cv2.resize(img, (w, h)), 0
elif flag == 2 or flag == 3 or flag == 4:
elif flag in [2, 3, 4]:
Comment on lines -106 to +104

Function element_resize_old refactored with the following changes:

  • Replace multiple comparisons of same variable with in operator [×2] (merge-comparisons)

label_image = np.ones((h, w, 3)) * base_shade
ih = img.shape[0]
iw = img.shape[1]
@@ -170,14 +166,14 @@ def create_img(elements, dst_file_path, cat, real=True):
y1 = int(bb[1])
x2 = int(bb[2]) - 1
y2 = int(bb[3]) - 1
x = x1
y = y1
w = x2 - x1
h = y2 - y1
if x1 >= 0 and y1 >= 0 and x2 < img_w and y2 < img_h and w > 0 and h > 0:
x = x1
y = y1
if not real and (h < 20 or w < 20):
continue
elif h <= 0 or w <= 0 or y >= img_h or x >= img_w:
elif y >= img_h or x >= img_w:
Comment on lines -173 to +176

Function create_img refactored with the following changes:

continue
if label == "name" and cat == "product_listing":
label = "filter"
@@ -199,14 +195,14 @@ def create_img(elements, dst_file_path, cat, real=True):
base_shade = 224
# print(label)
label_image, fw = element_resize(label_image, w, h, label_resize, base_shade)
if label == "image" or label == "icon":
if label in ["image", "icon"]:
cv2.line(label_image, (0, 0), (w - 1, h - 1), (79, 79, 79), thickness=1)
cv2.line(label_image, (0, h - 1), (w - 1, 0), (79, 79, 79), thickness=1)
if label_resize == 4:
fw = 0
if label_text is not None:
text = label_text[0]
if label in element_counted.keys():
if label in element_counted:
c = element_counted[label]
if len(label_text) > c:
text = label_text[c]
@@ -215,7 +211,7 @@ def create_img(elements, dst_file_path, cat, real=True):
label_image, fw = element_resize(base_label_image, w, h, flag=2, base_shade=189)
try:
base_image[y : y + h, x : x + w, :] = label_image
if label in element_counted.keys():
if label in element_counted:
element_counted[label] += 1
else:
element_counted[label] = 1
@@ -248,12 +244,11 @@ def find_font_scale_pil(fontScale, h, label_text, w, reduce_text):


def reduce_text_size(text):
if len(text) - 4 >= 5:
new_length = len(text) - 4
r = text[0:new_length]
return r, True
else:
if len(text) < 9:
return text, False
new_length = len(text) - 4
r = text[:new_length]
return r, True

Function reduce_text_size refactored with the following changes:
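The refactor inverts the condition into a guard clause with an early return. Note the equivalence it relies on: `len(text) - 4 >= 5` is the same as `len(text) >= 9`, so the guard becomes `len(text) < 9`. A sketch of the resulting shape:

```python
def reduce_text_size(text):
    # Guard clause: len(text) - 4 >= 5 is equivalent to len(text) >= 9,
    # so anything shorter than 9 characters is returned unchanged.
    if len(text) < 9:
        return text, False
    # Otherwise drop the last 4 characters and flag that we truncated.
    return text[: len(text) - 4], True
```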



def add_text_pil(label_image, label_text, align, label_resize, fw):
@@ -269,10 +264,7 @@ def add_text_pil(label_image, label_text, align, label_resize, fw):
label_image = label_image.astype(np.uint8)
h = label_image.shape[0]
w = label_image.shape[1]
if label_resize == 3 and fw > 0 and w > fw:
accesible_w = w - fw
else:
accesible_w = w
accesible_w = w - fw if label_resize == 3 and fw > 0 and w > fw else w
Comment on lines -272 to +267
Copy link
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Function add_text_pil refactored with the following changes:
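The four-line if/else assigning a single variable collapses into one conditional expression. A self-contained sketch (wrapped in a hypothetical helper for testability):

```python
def accessible_width(w, fw, label_resize):
    # Before: four-line if/else assigning one variable.
    # After: a single conditional expression with the same branches.
    return w - fw if label_resize == 3 and fw > 0 and w > fw else w
```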

font, textsize, label_text, fontScale = find_font_scale_pil(fontScale, h, label_text, accesible_w, reduce_text)
if label_resize == 3 and fontScale < 12:
return label_image, True
@@ -345,7 +337,7 @@ def add_text_pil(label_image, label_text, align, label_resize, fw):
dst_folder_path = os.path.join(dst_folder, dir)
if not os.path.exists(dst_folder_path):
os.mkdir(dst_folder_path)
dst_file_path = os.path.join(dst_folder_path, str(count) + "_" + str(real) + ".jpg")
dst_file_path = os.path.join(dst_folder_path, f"{str(count)}_{str(real)}.jpg")
Comment on lines -348 to +340
Copy link
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Lines 348-348 refactored with the following changes:

# dst_file_path = os.path.join(dst_folder, file[:-5]+".jpg")
elements = get_elements(json_path, real)
try:
6 changes: 3 additions & 3 deletions src/sagan_models.py
@@ -29,7 +29,7 @@ def create_generator(image_size=64, z_dim=100, filters=64, kernel_size=4, num_of

x, attn1 = SelfAttnModel(curr_filters)(x)

for i in range(repeat_num - 4):
for _ in range(repeat_num - 4):

Function create_generator refactored with the following changes:
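Renaming an unused loop variable to `_` is purely a readability signal; behavior is unchanged. A toy sketch of the halving loop (values illustrative, not the model's actual filter counts):

```python
curr_filters = 64
stages = []
for _ in range(3):  # only the iteration count matters; the index is unused
    curr_filters //= 2
    stages.append(curr_filters)
```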

curr_filters = curr_filters // 2
x = SpectralConv2DTranspose(filters=curr_filters, kernel_size=kernel_size, strides=2, padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
@@ -54,14 +54,14 @@ def create_discriminator(image_size=64, filters=64, kernel_size=4, num_of_catego
x = tf.keras.layers.concatenate([input_layers, y])
curr_filters = filters
# x = input_layers
for i in range(3):
for _ in range(3):
curr_filters = curr_filters * 2
x = SpectralConv2D(filters=curr_filters, kernel_size=kernel_size, strides=2, padding="same")(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)

x, attn1 = SelfAttnModel(curr_filters)(x)

for i in range(int(np.log2(image_size)) - 5):
for _ in range(int(np.log2(image_size)) - 5):
Comment on lines -57 to +64

Function create_discriminator refactored with the following changes:

curr_filters = curr_filters * 2
x = SpectralConv2D(filters=curr_filters, kernel_size=kernel_size, strides=2, padding="same")(x)
x = tf.keras.layers.LeakyReLU(alpha=0.1)(x)